Give Your CrewAI Agents
Persistent Memory
CrewAI is one of the most popular multi-agent frameworks. You define agents with roles, give them tasks, and let them collaborate. But when the crew finishes a run, everything it learned disappears. The next run starts from scratch.
CrewAI ships with a Memory class backed by LanceDB. It works within a single session. Across sessions, across projects, across different crews? That storage was not built for persistence at that level. And no third-party memory system has built a CrewAI integration either. Not Mem0, not Zep, not Letta.
We built one. OMEGA now ships an official CrewAI storage backend that persists all agent memories in a local SQLite database. Semantic deduplication, contradiction detection, time decay, graph relationships. All running locally, no API keys needed for the storage layer.
What CrewAI's Default Memory Misses
CrewAI's built-in memory uses LanceDB as its default storage backend. It handles in-session recall: an agent finishes a task, stores the result, and another agent in the same crew can retrieve it. That part works fine.
The problems show up between runs. Your research crew discovers that a particular data source is unreliable. It notes this as a lesson. Next week, a new run of the same crew has no record of that lesson. It trusts the unreliable source again.
The default storage also lacks deduplication. Run the same crew ten times and you get ten copies of similar observations cluttering retrieval. There is no contradiction detection, so conflicting memories coexist without any flag. And there are no typed relationships between memories, so the system cannot tell you that a decision from Tuesday superseded one from Monday.
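Semantic deduplication of this kind typically compares a new memory's embedding against stored embeddings and merges near-duplicates instead of inserting them. Here is a minimal illustration of the idea in plain Python (illustrative threshold and function names, not OMEGA's actual implementation):

```python
def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def is_duplicate(new_vec: list[float],
                 stored_vecs: list[list[float]],
                 threshold: float = 0.9) -> bool:
    # A new memory is merged rather than stored when it is near-identical
    # to an existing one. The threshold is illustrative; OMEGA exposes a
    # similar knob as consolidation_threshold.
    return any(cosine(new_vec, v) >= threshold for v in stored_vecs)
```

Without a check like this, every run of the same crew appends its own copy of the same observation, which is exactly the clutter described above.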
Three Lines to Set Up
$ pip install omega-memory crewai
$ omega setup
The `omega setup` command downloads the embedding model (bge-small-en-v1.5, ~33MB) and creates the `~/.omega/` data directory. That is all the infrastructure you need. No Docker, no Postgres, no cloud account.
Quick Start: Crew with Persistent Memory
```python
from crewai import Agent, Task, Crew
from integrations.crewai_memory import OmegaMemory

# Create OMEGA-backed memory
memory = OmegaMemory(project="my-research-crew")

# Use it with a Crew
researcher = Agent(
    role="Senior Researcher",
    goal="Find cutting-edge AI developments",
    backstory="You are an expert AI researcher.",
    memory=memory,  # Agent uses OMEGA for memory
)

task = Task(
    description="Research the latest advances in multi-agent systems.",
    expected_output="A summary of key developments.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    memory=memory,  # Crew-level shared memory
)

result = crew.kickoff()
# Memories from this run persist in OMEGA for future sessions
```

The `project` parameter scopes memories so different crews do not collide. Your research crew and your coding crew each have their own memory namespace, all stored in the same local database.
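To make the namespacing concrete, here is a minimal sketch of how project-scoped storage can work inside a single SQLite database (illustrative schema and data, not OMEGA's actual tables):

```python
import sqlite3

# One local database, multiple project namespaces (toy schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (project TEXT, content TEXT)")
conn.execute("INSERT INTO memories VALUES ('research-crew', 'source X is unreliable')")
conn.execute("INSERT INTO memories VALUES ('coding-crew', 'prefer pytest fixtures')")

# A recall scoped to one project never sees the other crew's memories
rows = conn.execute(
    "SELECT content FROM memories WHERE project = ?", ("research-crew",)
).fetchall()
# rows contains only the research crew's memory
```

Scoping at the query level keeps everything in one file while guaranteeing isolation between crews.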
Works with CrewAI Flows Too
CrewAI Flows are the newer orchestration primitive for building stateful, multi-step agent workflows. OMEGA works with them the same way:
```python
from crewai.flow.flow import Flow, start
from integrations.crewai_memory import OmegaMemory

class ResearchFlow(Flow):
    memory = OmegaMemory(project="research-flow")

    @start()
    def begin(self):
        # Store knowledge
        self.remember("Always cite primary sources, not secondary reviews")

        # Recall relevant memories from any previous session
        results = self.recall("citation preferences")
        for match in results:
            print(f"[{match.score:.2f}] {match.record.content}")
```

The `remember()` and `recall()` calls route through OMEGA's storage backend. Memories stored during one flow run are available in every subsequent run. You can build workflows that accumulate knowledge over weeks of execution.
Category Mapping
CrewAI uses category strings to classify memories. OMEGA uses typed event identifiers with different TTLs and retrieval weights. The integration maps between them automatically:
| CrewAI Category | OMEGA Event Type |
|---|---|
| `task_result` | `task_completion` |
| `observation` | `lesson_learned` |
| `decision` | `decision` |
| `error` | `error_pattern` |
| `preference` | `user_preference` |
| `lesson` | `lesson_learned` |
| `summary` | `session_summary` |
This mapping matters because OMEGA treats each event type differently. A user_preference never expires. A session_summary decays after 7 days. An error_pattern stays relevant for 14 days. Your crew does not need to manage any of this. The mapping handles it.
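The mapping and per-type lifetimes described above can be sketched as a pair of lookup tables. The values come from this post; the integration's actual code, fallback behavior, and TTL set may differ:

```python
# CrewAI category string -> OMEGA event type (from the table above)
CATEGORY_TO_EVENT_TYPE = {
    "task_result": "task_completion",
    "observation": "lesson_learned",
    "decision": "decision",
    "error": "error_pattern",
    "preference": "user_preference",
    "lesson": "lesson_learned",
    "summary": "session_summary",
}

# Illustrative TTLs in days; None means the memory never expires
EVENT_TYPE_TTL_DAYS = {
    "user_preference": None,
    "session_summary": 7,
    "error_pattern": 14,
}

def map_category(category: str) -> str:
    # Falling back to lesson_learned for unknown categories is an
    # assumption, not documented integration behavior
    return CATEGORY_TO_EVENT_TYPE.get(category, "lesson_learned")
```

The point of the indirection is that the crew code only ever sees category strings; expiry and retrieval weighting are decided per event type on the OMEGA side.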
What OMEGA Adds to CrewAI
Advanced Configuration
The OmegaMemory() factory covers most use cases. For fine-grained control, you can configure the storage backend and memory parameters separately:
```python
from integrations.crewai_memory import OmegaStorage
from crewai.memory import Memory

# Custom OMEGA storage with project scoping
storage = OmegaStorage(
    project="finance-agents",
    agent_type="crewai",
    omega_home="~/.omega",  # custom data directory
)

# Full control over Memory parameters
memory = Memory(
    storage=storage,
    llm="anthropic/claude-3-haiku-20240307",  # any litellm model
    recency_weight=0.2,           # less weight on recency
    semantic_weight=0.6,          # more weight on semantic match
    importance_weight=0.2,
    consolidation_threshold=0.9,  # higher = less aggressive merging
)
```

The storage backend is separate from the LLM used for memory analysis. OMEGA handles all embedding and retrieval locally. The `llm` parameter controls CrewAI's own analysis layer (scope inference, consolidation logic). You can use any model supported by litellm.
Known Limitations
We want to be upfront about what this integration does not do:
- **Embedding mismatch.** OMEGA generates its own 384-dim embeddings (bge-small-en-v1.5). CrewAI's default OpenAI embeddings (1536-dim) are not used for storage. When CrewAI passes a pre-computed embedding vector to `search()`, the integration falls back to listing recent records. For best results, use `recall()` on the Memory object, which routes through text-based search.
- **No bulk reset.** Calling `reset()` is a no-op. OMEGA does not support bulk deletion. Use `omega consolidate` from the CLI instead.
- **Flat scoping.** OMEGA uses project-based scoping rather than CrewAI's hierarchical `/company/team/project` paths. The integration extracts the last path segment as the project identifier.
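The last-segment extraction behind flat scoping is simple to picture. A sketch of the behavior described above (the helper name and edge-case handling are assumptions, not the integration's actual code):

```python
def project_from_scope(scope_path: str) -> str:
    """Collapse a hierarchical CrewAI scope path to a flat OMEGA project id.

    '/company/team/project' -> 'project'. The 'default' fallback for an
    empty path is illustrative.
    """
    return scope_path.strip("/").split("/")[-1] or "default"
```

One consequence worth knowing: two crews scoped as `/acme/research/core` and `/globex/infra/core` would share the `core` namespace under this scheme.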
Under the Hood
Every CrewAI memory operation maps to an OMEGA bridge function:
| CrewAI Operation | OMEGA Backend |
|---|---|
| `memory.remember(text)` | `omega.bridge.store(text, event_type)` |
| `memory.recall(query)` | `omega.bridge.query_structured(query)` |
| `memory.forget(...)` | `omega.bridge.delete_memory(id)` |
| `memory.update(id, ...)` | `omega.bridge.edit_memory(id, content)` |
The integration is a single Python file (crewai_memory.py) with no additional dependencies beyond omega-memory and crewai. It implements CrewAI's StorageBackend protocol, so it plugs in without monkey-patching or framework modifications.
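A protocol-based plug-in means any object with the right methods works, with no inheritance from framework classes. A rough sketch of the shape involved (the method names are assumptions based on the calls mentioned in this post, not CrewAI's verbatim protocol definition):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class StorageBackend(Protocol):
    """Structural interface: any class with matching methods satisfies it."""
    def save(self, value: Any, metadata: dict) -> None: ...
    def search(self, query: str, limit: int = 3) -> list: ...
    def reset(self) -> None: ...

class InMemoryStorage:
    """Toy backend demonstrating the no-monkey-patching plug-in style."""
    def __init__(self) -> None:
        self._items: list = []

    def save(self, value: Any, metadata: dict) -> None:
        self._items.append((value, metadata))

    def search(self, query: str, limit: int = 3) -> list:
        return [v for v, _ in self._items if query in v][:limit]

    def reset(self) -> None:
        self._items.clear()
```

Because the check is structural, the OMEGA backend can live in its own file and be handed to CrewAI directly, which is what keeps the integration to a single module.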
OMEGA is free, local-first, and Apache 2.0 licensed.