OMEGA Platform Architecture Redesign
Date: 2026-03-19
Status: Draft
Author: Jason Sosa + Claude
Stakeholders: Jason Sosa, Michael Anton Fischer
1. Problem Statement
The OMEGA project has grown from a standalone memory engine into a multi-layered platform encompassing memory, multi-agent orchestration, an admin dashboard, and a marketing site. The current architecture has several problems:
- AI agent confusion: Two repositories (private `omega`, public `omega-memory`) contain overlapping code. Agents hallucinate pro features when working on memory-only code and stumble when they encounter the dual-repo structure.
- No true multi-tenancy: The admin dashboard was built for a single user. The "Jimmy incident" (pre-March 2026) exposed that new users could see the owner's data via OAuth. Patches added `user_id` filtering and RLS, but there is no organizational hierarchy: no concept of entities-as-tenants or project-level data isolation.
- Supabase lock-in: Auth, database queries, and RLS are hardcoded to Supabase. This blocks on-premise deployment for enterprise customers and limits self-hosting options.
- Unclear boundaries: Memory, orchestration, admin dashboard, and marketing site all live in one private repo. Contributors to the open-source memory engine have no clean entry point.
Background
These issues were discussed in a March 18, 2026 meeting between Jason Sosa and Michael Anton Fischer. Key points from that meeting:
- Michael recommended making memory a submodule of a separate orchestration system to cleanly separate the two halves of the project.
- Multi-tenancy must be enforced at the core code level, not tacked on as an afterthought.
- The current codebase is completely tied to Supabase and lacks actual auth enforcement in the core.
- The project needs to support multi-tenant isolation where each tenant also needs internal multi-tenancy (per project, per entity).
- Michael built a proof-of-concept Astro dashboard with multi-provider OIDC auth and a cloud backend abstraction layer (separate PR).
Licensing Note
The March 18 meeting discussed Polyform Small Business license. This was subsequently ruled out — it violates the Kokyō Keishō Zaidan Stichting Charter Section 11 (core code must remain open source). The architecture must support Apache-2.0 for memory and a separate commercial license for the orchestration/platform layer. This further reinforces the need for clean repo separation.
2. Goals
- Eliminate AI agent confusion by separating memory, orchestration, and marketing into distinct repositories with clear boundaries and scoped `.claude/rules`.
- Implement multi-tenant data isolation with a User → Workspace → Project → Data hierarchy, enforced at the database level via RLS and at the application level via a DataStore interface.
- Abstract database and auth behind provider-agnostic interfaces to unblock on-premise deployment and "bring your own Supabase" self-hosting.
- Protect pro features by ensuring orchestration source code never appears in public repositories. Only the memory engine is open source.
- Enable the admin dashboard as a SaaS product — an "agent intelligence OS" where solo founders, agencies, and enterprises manage agents, jobs, projects, entities, and memories across multiple businesses.
Non-Goals
- Migrating away from Supabase (it remains the primary backend; we add abstraction for future portability).
- Building the on-premise Docker deployment (Phase 6; designed now, built later).
- Implementing team/collaboration features within entities (future; the schema supports it).
- Changing the Python memory engine's public API (omega-memory's interface stays stable).
- Migrating the marketing site framework (stays Next.js).
3. Architecture Decision: Sentry Model
Research Summary
We evaluated repository patterns from 10 major open-source projects with commercial layers:
| Pattern | Used By | Description |
|---|---|---|
| ee/ directory in same repo | GitLab, PostHog, Cal.com, n8n | Enterprise code visible to all contributors |
| Private repo wraps public | Sentry, Grafana | Private repo imports public as dependency, extends it |
| Fully separate repos | Temporal, Dagster, Airbyte | Cloud is architecturally distinct product |
| Monorepo, env-var gated | Supabase | Everything in one repo, features toggled at runtime |
No project uses git submodules for this purpose.
Decision: Private repo wraps public (Sentry Model)
Rationale:
- Pro code is invisible to contributors — the public repo contains only memory.
- Memory can evolve independently with community contributions.
- The private repo extends memory via pip dependency, adding orchestration and the admin dashboard.
- On-premise customers receive the private repo's Docker image — no memory internals exposed.
- Sentry validates this pattern at scale: `getsentry/sentry` (open, Python+TS) is extended by `getsentry/getsentry` (private, SaaS layer). Extension mechanisms include Django signals, swappable backends, and feature flags.
Why not ee/ directory (GitLab/PostHog model): Jason explicitly requires that pro code be invisible to contributors. The ee/ pattern makes enterprise code visible to anyone who clones the repo.
Why not fully separate repos (Temporal/Dagster model): The orchestration layer is tightly coupled to memory — it extends it, not replaces it. The overhead of fully decoupled repos isn't justified.
4. Repository Structure
Three Repositories
| Repository | Visibility | Purpose | Deploys To |
|---|---|---|---|
| omega-memory/omega-memory | Public | Standalone memory engine | PyPI (omega-memory) |
| singularityjason/omega | Private | Orchestration + Admin dashboard | Vercel (omegamax.co/admin) + Docker (on-prem) |
| singularityjason/omega-website | Private | Marketing site, blog, docs | Vercel (omegamax.co) |
4.1 omega-memory (Public)
omega-memory/omega-memory
├── src/omega/
│ ├── __init__.py
│ ├── sqlite_store/ ← storage engine
│ ├── bridge.py ← high-level memory API
│ ├── embedding.py ← ONNX embeddings
│ ├── schema.py ← data models
│ ├── types.py ← type definitions
│ ├── exceptions.py ← error types
│ ├── json_compat.py ← JSON utilities
│ ├── preferences.py ← user preferences
│ ├── plugins.py ← plugin interface
│ ├── crypto.py ← encryption
│ ├── cli.py ← CLI interface
│ └── server/
│ ├── mcp_server.py ← basic MCP server (memory tools only)
│ ├── handlers.py ← core tool handlers
│ └── tool_schemas.py ← core tool schemas
├── tests/ ← memory-only tests
├── hooks/ ← core hooks (subset of current hooks)
├── docs/
├── pyproject.toml ← publishable as omega-memory on PyPI
├── LICENSE ← Apache-2.0
└── README.md
Installs as: pip install omega-memory
Import: from omega.bridge import ..., from omega.sqlite_store import ...
4.2 omega (Private — Platform)
singularityjason/omega
├── src/omega_platform/
│ ├── orchestrator/
│ │ ├── coordination.py ← multi-agent coordination
│ │ ├── conflicts.py ← conflict resolution
│ │ ├── coord_reliability.py ← DLQ, circuit breaker
│ │ └── sandbox/ ← agent tool sandboxing (new)
│ ├── hooks/
│ │ ├── fast_hook.py ← full dispatcher with coordination
│ │ ├── pre_file_guard.py
│ │ ├── pre_commit_guard.py
│ │ ├── coord_session_start.py
│ │ ├── coord_session_stop.py
│ │ ├── coord_heartbeat.py
│ │ ├── auto_capture.py
│ │ └── ... ← remaining hooks
│ ├── server/
│ │ ├── coord_handlers.py ← coordination tool handlers
│ │ ├── coord_schemas.py ← coordination tool schemas
│ │ ├── hook_server/ ← hook server implementation
│ │ ├── auth.py ← server auth
│ │ └── jit_proxy.py
│ ├── cloud/ ← Supabase sync engine
│ ├── entity/ ← entity management
│ ├── knowledge/ ← knowledge base
│ ├── oracle/ ← oracle/router
│ ├── profile/ ← user profiles
│ ├── protocol.py ← OMEGA protocol
│ ├── advisor.py ← advisory engine
│ ├── pattern_learner.py
│ ├── thompson.py
│ └── license.py ← license validation
│
├── admin/ ← Next.js dashboard (THE product)
│ ├── app/
│ │ ├── admin/ ← multi-tenant admin UI
│ │ │ ├── [entitySlug]/ ← entity-scoped views
│ │ │ └── settings/ ← account settings
│ │ ├── api/
│ │ │ └── admin/ ← admin API routes
│ │ └── login/ ← admin auth
│ ├── lib/
│ │ ├── db/
│ │ │ ├── interface.ts ← DataStore interface
│ │ │ ├── supabase.ts ← Supabase adapter
│ │ │ ├── postgres.ts ← Raw Postgres (on-prem, future)
│ │ │ └── sqlite.ts ← SQLite (air-gapped, future)
│ │ └── auth/
│ │ ├── interface.ts ← AuthProvider interface
│ │ ├── supabase.ts ← Supabase OAuth (current)
│ │ └── oidc.ts ← Generic OIDC (enterprise, future)
│ ├── components/ ← admin UI components
│ ├── hooks/ ← React hooks
│ ├── Dockerfile ← on-prem standalone build
│ └── package.json
│
├── supabase/
│ └── migrations/ ← all Supabase migrations
│
├── docker-compose.yml ← on-prem: admin + postgres + omega
├── pyproject.toml ← depends on omega-memory
├── Makefile ← dev setup automation
└── LICENSE ← commercial
Python imports:
from omega.bridge import store_memory # from omega-memory (pip)
from omega_platform.orchestrator import ... # from this repo
from omega_platform.server.coord_handlers import ...
4.3 omega-website (Private — Marketing)
singularityjason/omega-website
├── app/
│ ├── page.tsx ← landing page
│ ├── blog/ ← blog
│ ├── docs/ ← documentation
│ ├── pricing/ ← pricing page
│ ├── compare/ ← comparison pages (vs Mem0, etc.)
│ ├── pro/ ← pro feature marketing
│ └── odyssey/ ← odyssey page
├── components/
├── lib/
├── public/
├── package.json
├── next.config.ts
└── vercel.json ← deploys to omegamax.co
Source code is private. The website is publicly accessible via browser at omegamax.co.
4.4 Domain Routing
omegamax.co/ → omega-website (Vercel project)
omegamax.co/blog/* → omega-website
omegamax.co/docs/* → omega-website
omegamax.co/pricing → omega-website
omegamax.co/admin/* → omega admin (Vercel project)
omegamax.co/api/admin/* → omega admin (Vercel project)
Implemented via Vercel path-based routing across two projects on the same domain, or via rewrites in the marketing site's next.config.ts.
5. Multi-Tenant Data Architecture
5.1 Tenant Hierarchy
User (pro subscriber: founder, agency owner, consultant)
├── Workspace A (a business, client, or account they manage)
│   ├── Project 1 (a product, initiative, or campaign)
│   │   ├── Agent Sessions
│   │   ├── Memories
│   │   ├── Jobs
│   │   └── Coordination data
│   └── Project 2
│       └── ...
├── Workspace B (another business)
│   └── Project 3
└── Workspace C (a consulting client)
    └── ...
Each pro user manages multiple workspaces. Each workspace contains multiple projects. All data (memories, sessions, tasks, jobs) belongs to exactly one workspace and optionally one project within that workspace.
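The ownership rules above can be sketched as plain data models. This is illustrative only; names like `MemoryRecord` and `belongs_to` are hypothetical helpers, not part of the schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical models mirroring the User -> Workspace -> Project -> Data hierarchy.
@dataclass
class Workspace:
    id: str
    user_id: str  # owning pro subscriber

@dataclass
class Project:
    id: str
    workspace_id: str

@dataclass
class MemoryRecord:
    id: str
    workspace_id: str          # exactly one workspace
    project_id: Optional[str]  # optionally one project within that workspace

def belongs_to(record: MemoryRecord, workspace: Workspace,
               project: Optional[Project] = None) -> bool:
    """A record belongs to exactly one workspace and optionally one of its projects."""
    if record.workspace_id != workspace.id:
        return False
    if record.project_id is not None:
        return (project is not None
                and project.workspace_id == workspace.id
                and record.project_id == project.id)
    return True
```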
5.2 Database Schema
Naming Collision: entities vs workspaces
The existing entities table (migration 20260225100000) stores tracked people and companies (entity graph data). The multi-tenant hierarchy needs a table for businesses/accounts a user manages. To avoid a semantic collision, the tenant-level table is named workspaces:
- `workspaces` = businesses/clients the user manages (tenant boundary)
- `entities` = tracked people, companies, organizations within a workspace (existing table, kept as-is)
Existing Schema Issues to Address
- Entity ID type: The existing `entities` table uses `id TEXT PRIMARY KEY`, and all foreign keys referencing `entity_id` are TEXT. This spec does NOT propose migrating to UUID; the existing TEXT IDs remain. The new `workspaces` table uses UUID, and the `workspace_id` column on data tables is UUID.
- Missing `user_id` on coord tables: The `coord_sessions`, `coord_tasks`, `coord_messages`, `coord_handoffs`, `coord_intents`, `coord_decisions`, `coord_git_events`, `coord_metrics`, `coord_file_claims`, and `coord_file_reads` tables do NOT currently have `user_id` columns (despite earlier multi-tenant work that added `user_id` to `memories`, `tweets`, and other tables). Both `user_id` and `workspace_id` must be added to all coord tables.
- Existing `projects` and `entity_projects` tables: Migration `20260302110000` created a `projects` table. Migration `20260318010000` created `entity_projects`. These must be reconciled: `entity_projects` evolves to become workspace-scoped projects, and the earlier `projects` table is deprecated or merged.
New tables:
-- User's subscription account
CREATE TABLE user_accounts (
user_id UUID PRIMARY KEY REFERENCES auth.users,
plan TEXT NOT NULL DEFAULT 'free', -- free/pro/enterprise
license_key TEXT,
settings JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now()
);
-- Workspaces the user manages (tenant boundary)
CREATE TABLE workspaces (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES auth.users,
name TEXT NOT NULL,
workspace_type TEXT, -- business/client/personal
slug TEXT NOT NULL, -- URL segment: /admin/acme-corp
settings JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(user_id, slug)
);
-- Projects within a workspace
CREATE TABLE workspace_projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
workspace_id UUID NOT NULL REFERENCES workspaces ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES auth.users, -- denormalized for RLS
name TEXT NOT NULL,
description TEXT,
status TEXT NOT NULL DEFAULT 'active',
settings JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
Modified tables — all data tables receive workspace_id and user_id (denormalized):
-- Memories (modified — user_id exists on some rows, needs backfill)
ALTER TABLE memories
ADD COLUMN workspace_id UUID REFERENCES workspaces;
-- user_id already exists (added in migration 20260302100000, nullable, needs backfill)
-- Coordination tables — BOTH user_id AND workspace_id must be added
-- (user_id does NOT currently exist on coord tables)
ALTER TABLE coord_sessions
ADD COLUMN user_id UUID REFERENCES auth.users,
ADD COLUMN workspace_id UUID REFERENCES workspaces;
ALTER TABLE coord_tasks
ADD COLUMN user_id UUID REFERENCES auth.users,
ADD COLUMN workspace_id UUID REFERENCES workspaces;
-- Same pattern for ALL coord tables:
-- coord_messages, coord_handoffs, coord_intents, coord_decisions,
-- coord_git_events, coord_metrics, coord_file_claims, coord_file_reads
-- Other data tables (user_id exists, add workspace_id):
-- tweets, engagement_suggestions, pending_events, job_approvals
-- Platform-global tables (NO workspace_id needed):
-- allowed_users, user_identities, licenses, pro_releases, download_events,
-- website_llm_usage, notification_settings
Indexes (critical for RLS performance):
CREATE INDEX idx_workspaces_user ON workspaces(user_id);
CREATE INDEX idx_workspace_projects_workspace ON workspace_projects(workspace_id);
CREATE INDEX idx_workspace_projects_user ON workspace_projects(user_id);
CREATE INDEX idx_entities_user ON entities(user_id);
CREATE INDEX idx_memories_user ON memories(user_id);
CREATE INDEX idx_memories_workspace ON memories(workspace_id);
CREATE INDEX idx_memories_project ON memories(project_id);
CREATE INDEX idx_coord_sessions_user ON coord_sessions(user_id);
CREATE INDEX idx_coord_sessions_workspace ON coord_sessions(workspace_id);
-- ... same pattern for all data tables
5.3 Row Level Security
-- User isolation on workspaces
CREATE POLICY user_isolation ON workspaces
USING (user_id = auth.uid())
WITH CHECK (user_id = auth.uid());
-- User isolation on workspace projects (via denormalized user_id)
CREATE POLICY user_isolation ON workspace_projects
USING (user_id = auth.uid())
WITH CHECK (user_id = auth.uid());
-- User isolation on all data tables (same pattern)
CREATE POLICY user_isolation ON memories
USING (user_id = auth.uid())
WITH CHECK (user_id = auth.uid());
-- Repeat for every data table: coord_sessions, coord_tasks,
-- coord_messages, entities (tracked), etc.
Why user_id = auth.uid() and not a join-based policy:
- Single column check is the fastest possible RLS evaluation.
- No joins to `workspaces` or membership tables: the denormalized `user_id` makes this O(1).
- Workspace and project scoping happens at the application layer (the DataStore filters by `workspace_id` and `project_id`), with RLS as the safety net ensuring a user can never access another user's data even if application code has a bug.
- When team features are added later, the RLS policy extends to include a `workspace_collaborators` table, but the base `user_id` check remains as defense-in-depth.
5.4 DataStore Interface
// admin/lib/db/interface.ts
export interface MemoryFilters {
workspaceId: string;
projectId?: string;
eventType?: string;
memoryType?: string;
search?: string;
limit?: number;
offset?: number;
}
export interface TrackedEntityFilters {
entityType?: string;
status?: string;
}
export interface SessionFilters {
workspaceId: string;
projectId?: string;
status?: string;
}
export interface DataStore {
// User account
getUserAccount(userId: string): Promise<UserAccount | null>;
// Workspaces (businesses/clients the user manages)
getWorkspaces(userId: string): Promise<Workspace[]>;
getWorkspace(userId: string, workspaceSlug: string): Promise<Workspace | null>;
createWorkspace(userId: string, data: CreateWorkspaceInput): Promise<Workspace>;
updateWorkspace(userId: string, workspaceId: string, data: UpdateWorkspaceInput): Promise<Workspace>;
// Projects within a workspace
getProjects(userId: string, workspaceId: string): Promise<Project[]>;
getProject(userId: string, projectId: string): Promise<Project | null>;
createProject(userId: string, workspaceId: string, data: CreateProjectInput): Promise<Project>;
// Memories
getMemories(userId: string, filters: MemoryFilters): Promise<Memory[]>;
getMemoryGraph(userId: string, workspaceId: string, projectId?: string): Promise<GraphData>;
// Tracked entities (people/companies within a workspace)
getTrackedEntities(userId: string, workspaceId: string, filters?: TrackedEntityFilters): Promise<TrackedEntity[]>;
getEntityGraph(userId: string, workspaceId: string): Promise<GraphData>;
// Coordination
getSessions(userId: string, filters: SessionFilters): Promise<Session[]>;
getTasks(userId: string, workspaceId: string): Promise<Task[]>;
getMessages(userId: string, sessionId: string): Promise<Message[]>;
getHandoffs(userId: string, workspaceId: string): Promise<Handoff[]>;
// Jobs
getJobs(userId: string, workspaceId: string): Promise<Job[]>;
// Dashboard metrics
getDashboardMetrics(userId: string, workspaceId: string): Promise<DashboardMetrics>;
}
Implementations:
- `SupabaseStore`: uses the Supabase client with RLS (ships first)
- `PostgresStore`: raw `pg` client for on-premise Postgres (Phase 6)
- `SQLiteStore`: for air-gapped enterprise deployments (Phase 6)
5.5 AuthProvider Interface
// admin/lib/auth/interface.ts
export interface User {
id: string;
email: string;
name?: string;
avatarUrl?: string;
}
export interface AuthSession {
user: User;
accessToken: string;
expiresAt: Date;
}
export interface AuthProvider {
getCurrentUser(request: Request): Promise<User | null>;
validateSession(request: Request): Promise<AuthSession | null>;
getUserAccount(userId: string): Promise<UserAccount | null>;
signIn(credentials: unknown): Promise<AuthSession>;
signOut(request: Request): Promise<void>;
}
Implementations:
- `SupabaseAuth`: current Supabase OAuth + password/passkey (ships first)
- `OIDCAuth`: generic OpenID Connect for enterprise SSO (Phase 6)
5.6 "Bring Your Own Supabase" Flow
For self-hosted pro users who want their data on their own Supabase instance:
- User navigates to `/admin/settings` → the "Database" section.
- Enters their Supabase project URL, anon key, and service role key.
- Credentials are stored encrypted in `user_accounts.settings` on the managed platform database.
- On first connect, the admin runs a schema version check against the user's Supabase. If migrations are needed, it provides a one-click migration runner or downloadable SQL.
- All subsequent data queries for that user create a `SupabaseStore` with the user's credentials, pointing at the user's database.
- The managed platform database stores only `user_accounts`, `workspaces` (slugs/metadata), and encrypted credentials. All memories, sessions, and coordination data live on the user's Supabase.
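The per-user routing decision in this flow can be sketched as follows. This is a minimal sketch: `StoreConfig`, the `byos_supabase` settings key, and the placeholder URL are assumptions, not the actual `user_accounts.settings` layout:

```python
from dataclasses import dataclass

@dataclass
class StoreConfig:
    url: str
    key: str
    managed: bool  # True -> platform database, False -> user's own Supabase

MANAGED_URL = "https://platform.example.supabase.co"  # placeholder

def resolve_store(settings: dict, managed_key: str) -> StoreConfig:
    """Pick the backend for a user's data queries.

    If the user has saved BYOS credentials, route to their Supabase;
    otherwise fall back to the managed platform database.
    """
    byos = settings.get("byos_supabase")
    if byos and byos.get("url") and byos.get("service_role_key"):
        # In the real flow, credentials would be decrypted here before use.
        return StoreConfig(url=byos["url"], key=byos["service_role_key"], managed=False)
    return StoreConfig(url=MANAGED_URL, key=managed_key, managed=True)
```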
6. Admin UI Structure
6.1 URL Hierarchy
/admin → redirect to default workspace or workspace picker
/admin/settings → account settings (subscription, license, database config)
/admin/:workspaceSlug → workspace dashboard (overview metrics, recent activity)
/admin/:workspaceSlug/projects → project list for this workspace
/admin/:workspaceSlug/projects/:slug → project detail (agents, memories, jobs)
/admin/:workspaceSlug/agents → active agent sessions across all projects
/admin/:workspaceSlug/memories → memory graph for this workspace
/admin/:workspaceSlug/entities → tracked entities (people, companies) within this workspace
/admin/:workspaceSlug/jobs → job queue and audit log
/admin/:workspaceSlug/insights → AI-generated insights for this workspace
/admin/:workspaceSlug/settings → workspace-level settings
6.2 Navigation
Sidebar:
[Workspace Picker Dropdown] ← switch between workspaces (businesses/clients)
─────────────────────
Dashboard ← workspace overview
Projects ← projects within workspace
Agents ← coordination/sessions
Memories ← memory graph
Entities ← tracked people/companies
Jobs ← job queue
Insights ← AI insights
─────────────────────
Settings (workspace)
Settings (account) ← subscription, DB config
6.3 Workspace Picker
When a user has multiple workspaces, the sidebar shows a dropdown at the top. Selecting a workspace scopes all views to that workspace's data. The current workspace slug is in the URL, so deep links work and browser history is workspace-aware.
7. New User Onboarding
7.1 Research-Backed Design Principles
Based on analysis of Linear, Vercel, Supabase, Sentry, Stripe, Datadog, Fly.io, and others:
- Progressive disclosure: Collect 2-3 fields at signup, defer everything else. Never gate first-value behind email verification or profile completion.
- Smart defaults: Derive workspace name from context (email domain, git remote). Auto-generate slug. Reduce decisions.
- First-value moment: Generate a verification step that proves the product works (Sentry's "Throw Sample Error" pattern). For OMEGA: first memory stored + visible in dashboard.
- Self-hosted divergence point: Keep everything after admin setup identical between managed and self-hosted. Only the endpoint URL changes.
- CLI-to-dashboard bridge: Use an API key or token as the connection mechanism (Datadog pattern), with browser OAuth for interactive setup (Vercel pattern).
7.2 Onboarding Flow
MANAGED PATH:
Signup (OAuth/email)
→ Create first workspace (name only, slug auto-generated)
→ Choose: "Connect CLI" or "Explore dashboard"
→ If CLI: copy config snippet with API key + workspace_id
→ If dashboard: show demo data / getting-started checklist
→ First value: memory appears in dashboard
SELF-HOSTED PATH (BYOS):
Signup (OAuth/email)
→ Create first workspace
→ "Connect your database" wizard
→ Enter Supabase URL + keys (or Postgres connection string)
→ Auto-run schema migrations against their instance
→ Verify connection
→ Same CLI/dashboard flow as managed
ON-PREM PATH:
Docker deployment (admin creates instance)
→ First user becomes admin (like GitLab CE)
→ Configure auth (OIDC provider URL, client ID)
→ Create first workspace
→ Same CLI/dashboard flow
7.3 Signup → First Workspace (Web)
Step 1: Account creation (1 screen)
- Sign up via Supabase OAuth (Google, GitHub) or email/password
- Creates a `user_accounts` row with `plan: 'free'`
- No email verification gate: the user proceeds immediately (verification can happen async)
Step 2: Create first workspace (1 screen, 1 required field)
- Field: Workspace name (e.g., "Acme Corp", "My Agency", "Personal")
- Auto-generated: slug from name (editable)
- Optional: workspace type (business / client / personal) — defaults to "business"
- Optional: icon/avatar — small personalization step increases commitment (the IKEA effect)
- Creates a `workspaces` row
- Skippable for managed-only users? No: a workspace is required to scope all data
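Slug auto-generation from the workspace name might look like this (a sketch; the exact normalization rules are not specified in this document):

```python
import re

def slugify(name: str) -> str:
    """Derive a URL-safe slug from a workspace name, e.g. "Acme Corp" -> "acme-corp"."""
    slug = name.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics into hyphens
    return slug.strip("-") or "workspace"    # never return an empty slug
```

Combined with the `UNIQUE(user_id, slug)` constraint, a collision can be resolved by appending a numeric suffix ("acme-corp-2").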
Step 3: Choose path (1 screen, 2 options)
┌─────────────────────┐ ┌─────────────────────┐
│ Connect your CLI │ │ Explore Dashboard │
│ │ │ │
│ Already running │ │ See what OMEGA can │
│ omega serve? │ │ do with sample data │
│ Connect it now. │ │ │
└─────────────────────┘ └─────────────────────┘
7.4 CLI Connection (the bridge)
Pattern: API key + config file (Datadog/Sentry hybrid)
When the user chooses "Connect your CLI", the dashboard:
- Generates a workspace API key (stored in `workspaces.settings.api_keys[]`)
- Displays a ready-to-copy config snippet:
# Add to ~/.omega/config.yaml
user_id: "abc-123-def"
workspace: "acme-corp"
workspace_key: "omk_live_xxxxxxxxxxxx"
sync:
enabled: true
url: "https://omegamax.co/api/sync" # or their own Supabase URL for BYOS
- User pastes the snippet into their config and restarts `omega serve`
- The CLI connects and syncs the first batch of local memories to the dashboard
- The dashboard shows "Connection successful! X memories synced": the first-value moment
For CI/headless environments: OMEGA_WORKSPACE_KEY environment variable (no browser needed).
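Key generation for the `omk_live_` format could be sketched as follows (the prefix and length are illustrative, matching the example snippet above; not a specified format):

```python
import secrets

def generate_workspace_key(live: bool = True) -> str:
    """Generate a workspace API key like omk_live_xxxx, using a CSPRNG."""
    prefix = "omk_live_" if live else "omk_test_"
    return prefix + secrets.token_urlsafe(24)  # 32 URL-safe random characters
```

As with any bearer credential, only a hash of the key should be stored server-side.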
Alternative: omega link command (Vercel pattern, future enhancement):
omega link
# Opens browser → user authenticates → selects workspace → writes config automatically
7.5 BYOS Database Setup (self-hosted Supabase)
When a user chooses "Connect your database" in the BYOS flow:
- Enter credentials: Supabase URL + anon key + service role key (or Postgres connection string for on-prem)
- Connection test: Dashboard pings the database to verify connectivity
- Schema check: Compares the user's database schema version against the required version
- Auto-migration: If schema is behind, show the required migrations with a "Run migrations" button (or provide downloadable SQL for review-first users)
- Verification: Insert a test row, read it back, delete it — confirms RLS and write access work
- Store credentials: Encrypted in `user_accounts.settings` on the managed platform (for routing future requests)
Error handling: If credentials are wrong, connection fails, or migrations can't run — show clear error messages with next steps, never silently fail.
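The schema check and auto-migration steps reduce to a diff between applied and required migration versions. A sketch (the migration-tracking mechanism on the user's database is an assumption):

```python
def pending_migrations(applied: list[str], required: list[str]) -> list[str]:
    """Return required migration versions not yet applied, in order.

    `applied` would be read from the user's database (e.g. a migrations
    tracking table); `required` ships with the dashboard build.
    """
    done = set(applied)
    return [version for version in required if version not in done]
```

If the returned list is empty, the schema is current; otherwise the dashboard offers the "Run migrations" button or downloadable SQL for exactly these versions.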
7.6 Getting-Started Checklist (post-onboarding)
Persistent in the dashboard sidebar until dismissed (Stripe/Sentry pattern):
Getting Started [3/6 complete]
✅ Create your workspace
✅ Connect your CLI
✅ Store your first memory
☐ Create a project
☐ Set up your first entity
☐ Explore the memory graph
Each checklist item links to the relevant dashboard section or documentation. Completing all items triggers a "You're all set!" dismissal with an optional tour of advanced features.
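The checklist state could be tracked as a simple set of completed step IDs (a sketch; the step identifiers are hypothetical):

```python
# Hypothetical step identifiers for the six checklist items.
STEPS = ["create_workspace", "connect_cli", "first_memory",
         "create_project", "first_entity", "explore_graph"]

def checklist_progress(completed: set[str]) -> str:
    """Render the sidebar progress label, e.g. '3/6 complete'."""
    done = sum(1 for step in STEPS if step in completed)
    return f"{done}/{len(STEPS)} complete"
```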
7.7 Demo/Sample Data (optional, future)
For users who choose "Explore Dashboard" before connecting their CLI, pre-populate the workspace with sample data:
- 50 sample memories across 3 entity types
- 5 sample agent sessions with coordination events
- 2 sample projects with memories scoped to each
- A pre-built memory graph visualization
Sample data is clearly labeled as demo content and can be deleted with one click when the user is ready to use real data. This follows Linear's "interactive demo workspace" pattern.
8. Python Integration
8.1 omega serve Configuration
When a user runs omega serve locally, they configure their workspace and project context:
# ~/.omega/config.yaml
user_id: "uuid-from-auth"
default_workspace: "acme-corp"
default_project: "product-launch"
sync:
enabled: true
backend: "supabase" # or "postgres", "sqlite"
supabase_url: "https://xxx.supabase.co"
supabase_key: "..."
All memories, sessions, and coordination data created by the local MCP server are tagged with workspace_id and project_id from this config. The sync engine pushes to the configured backend with these tags intact.
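The tagging step can be sketched as follows (`tag_record` is a hypothetical helper, and the config is shown already parsed from YAML into a dict):

```python
def tag_record(record: dict, config: dict) -> dict:
    """Attach workspace/project context from ~/.omega/config.yaml to a record.

    Explicit tags already on the record win; the config supplies defaults.
    """
    tagged = dict(record)
    tagged.setdefault("workspace_id", config.get("default_workspace"))
    tagged.setdefault("project_id", config.get("default_project"))
    tagged["user_id"] = config["user_id"]  # always stamped from config
    return tagged
```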
8.2 omega_platform Extending omega
The private omega_platform package extends the public omega package:
# omega_platform extends omega's MCP server with coordination tools
from omega.server.mcp_server import create_server # from omega-memory
from omega_platform.server.coord_handlers import register_coord_tools
server = create_server()
register_coord_tools(server) # adds coordination, hooks, etc.
The extension mechanism follows Sentry's pattern: the public package defines clean extension points (tool registration, hook system, plugin interface), and the private package plugs into them.
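A minimal sketch of such an extension point, with a toy tool registry standing in for the real server (the actual `create_server` API may differ):

```python
from typing import Callable

class Server:
    """Stand-in for the public MCP server: holds a tool registry."""
    def __init__(self) -> None:
        self.tools: dict[str, Callable] = {}

    def register_tool(self, name: str, handler: Callable) -> None:
        self.tools[name] = handler

def create_server() -> Server:
    """Public package: create a server with core memory tools only."""
    server = Server()
    server.register_tool("store_memory", lambda **kw: "stored")
    return server

def register_coord_tools(server: Server) -> None:
    """Private package: plug coordination tools into the same registry."""
    server.register_tool("claim_file", lambda **kw: "claimed")
```

The public package never imports the private one; the private package only adds to registries the public package already exposes.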
9. Migration Strategy
Current State
- Jason is the only admin user. Jimmy and Michael are not using the dashboard.
- All existing data belongs to Jason's account.
- The public repo (`omega-memory`) exists but is a manual sync target, ~40 commits behind.
- The marketing site and admin dashboard are co-located in `~/Projects/omega/website/`.
Phase 1: Extract Memory into Standalone Repo
Goal: omega-memory/omega-memory becomes the source of truth for memory code.
- Clean up the `omega-memory` repo: ensure it builds and tests independently with no pro code.
- Verify all memory-only tests pass standalone.
- Publish an updated version to PyPI.
- Delete `sync-manifest.yaml` and `scripts/sync-to-public.py` from the private repo.
- Add `omega-memory` as a pip dependency in the private repo's `pyproject.toml`.
Risk: Medium — import paths may break. Mitigated by the fact that `omega-memory` already publishes to PyPI under the `omega` import namespace.
Can run in parallel with: Phase 3.
Phase 1b: Resolve DIVERGED Files
Goal: Split the 8 DIVERGED files into clean public base + private extension.
The following files exist in both repos with different content (pro code interleaved with core code):
- `bridge.py`: 5+ lazy imports from `omega.coordination` woven into function bodies
- `mcp_server.py`: 23+ imports from pro modules (license, coordination, hook_server, embedding_daemon, pid_registry)
- `handlers.py`: core tool handlers + pro tool registration
- `tool_schemas.py`: core schemas + pro schemas
- `embedding.py`: core ONNX engine + pro embedding daemon client
- `plugins.py`, `types.py`, `__init__.py`: minor divergences
- `sqlite_store/`: package with core storage + pro extensions
Resolution strategy per file:
- bridge.py: Extract a clean `omega.bridge` (public) with extension hooks. Pro-specific functions (coordination-aware store, protocol-aware query) move to `omega_platform.bridge_ext`, which monkey-patches or wraps the base bridge.
- mcp_server.py: Extract a clean `omega.server.mcp_server` (public) that creates a base server with core tools only. Expose a `register_tools(server)` hook. `omega_platform.server.platform_server` calls `create_server()`, then registers coordination tools, the hook server, the embedding daemon, and license checks.
- handlers.py / tool_schemas.py: Split into core handlers/schemas (public) and `coord_handlers`/`coord_schemas` (already separate, pro). The DIVERGED handlers get their pro-specific tool registrations removed; those are added by `omega_platform`.
- embedding.py: The public version is pure ONNX. The pro extension adds the daemon client + shared socket.
- sqlite_store/: The public version is the core storage engine. Pro adds entity isolation, sensitivity classification, and cloud sync hooks via the existing plugin system.
Each DIVERGED file must have its public version tested independently (no pro imports, no try/except ImportError fallbacks to pro modules).
Risk: HIGH — this is the most architecturally complex phase. The entanglement in bridge.py and mcp_server.py is deep. Requires careful design of extension points where none currently exist.
Blocked by: Phase 1 (public repo must be the source of truth first).
Must complete before: Phase 2 (private repo restructure depends on clean separation).
Phase 2: Restructure Private Repo
Goal: Private repo becomes the platform repo with clean boundaries.
- Rename `website/` → `admin/`.
- Create the `src/omega_platform/` namespace.
- Move orchestration code: `coordination.py`, `conflicts.py`, `coord_reliability.py`, hooks, coord_handlers, coord_schemas, cloud, entity, knowledge, oracle, profile, protocol, advisor, license.
- Remove memory code from the private repo (it's now a pip dependency via omega-memory).
- Update all internal imports to use `omega_platform.*`. This includes ~116 cross-references across 30 Python files; many are lazy imports inside function bodies.
- Update the Vercel project configuration to build from `admin/` instead of `website/`.
- Verify all tests pass with omega-memory as a pip dependency (not local source).
Risk: HIGH — large refactor with many import changes, compounded by the DIVERGED file resolution in Phase 1b. Mitigated by comprehensive test suite (77 test files) and the fact that Phase 1b already established the clean extension points.
Blocked by: Phase 1b (DIVERGED files must be resolved first).
Phase 3: Extract Marketing Site
Goal: Marketing pages in their own repo, deployed to omegamax.co.
- Create the `singularityjason/omega-website` repo.
- Move from current `website/`: landing page, blog, docs, pricing, compare pages, pro marketing page, odyssey, public assets, OG image generation.
- Set up a Vercel project → deploys to `omegamax.co`.
- Configure path routing so `/admin/*` routes to the admin Vercel project.
Risk: Low — straightforward file move.
Can run in parallel with: Phase 1.
Phase 4: Multi-Tenant Schema Migration
Goal: Implement User → Entity → Project → Data hierarchy.
- Create `user_accounts` table.
- Evolve `entities` table — add `slug`, ensure `user_id` is populated.
- Evolve `entity_projects` → `projects` table with `entity_id` relationship.
- Add `entity_id` column (nullable initially) to all data tables.
- Backfill: all existing data → Jason's `user_id`, a default entity (e.g., "omega").
- Add RLS policies (alongside existing ones).
- Add indexes on all `user_id`, `entity_id`, `project_id` columns.
- Update all admin API routes to accept entity context and filter accordingly.
- Set `entity_id` to NOT NULL after backfill is verified.
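The nullable-column → backfill → verify → NOT NULL sequence can be rehearsed in miniature. This sketch uses SQLite as a stand-in; the real migration targets Supabase Postgres, and the `memories` table and default entity value are illustrative assumptions.

```python
import sqlite3

# Miniature rehearsal of the Phase 4 backfill (SQLite stand-in; table
# and default entity are assumptions, not the real schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany("INSERT INTO memories (content) VALUES (?)",
                 [("note a",), ("note b",)])

# 1. Add the tenant column as nullable so existing rows keep working.
conn.execute("ALTER TABLE memories ADD COLUMN entity_id TEXT")

# 2. Backfill every pre-existing row onto the default entity.
conn.execute("UPDATE memories SET entity_id = 'omega' WHERE entity_id IS NULL")

# 3. Verify the backfill before enforcing the constraint (in Postgres:
#    ALTER TABLE memories ALTER COLUMN entity_id SET NOT NULL).
remaining = conn.execute(
    "SELECT COUNT(*) FROM memories WHERE entity_id IS NULL").fetchone()[0]
assert remaining == 0
```

Step 3 is the gate for the final migration step: only once the NULL count is zero (and writers are paused) is it safe to flip the column to NOT NULL.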
Risk: High — data migration, but mitigated by Jason being the only user (no multi-user coordination needed).
Blocked by: Phase 2 (admin must be restructured first).
Phase 5: Auth & Database Abstraction
Goal: Implement DataStore and AuthProvider interfaces.
- Define `DataStore` interface in `admin/lib/db/interface.ts`.
- Implement `SupabaseStore` — refactor all API routes from direct Supabase calls to DataStore methods.
- Define `AuthProvider` interface in `admin/lib/auth/interface.ts`.
- Implement `SupabaseAuth` — refactor current auth to use the interface.
- Add "bring your own Supabase" settings flow (credentials entry, schema version check, migration runner).
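The interface-first pattern is sketched below in Python for brevity (the plan puts the real interfaces in TypeScript under `admin/lib/`); the method names and the `InMemoryStore` test double are assumptions for illustration.

```python
# Sketch of the DataStore abstraction. SupabaseStore and PostgresStore
# would satisfy the same Protocol; routes depend only on the interface.
from typing import Dict, List, Protocol

Row = Dict[str, object]

class DataStore(Protocol):
    def list_memories(self, entity_id: str) -> List[Row]: ...
    def insert_memory(self, entity_id: str, row: Row) -> None: ...

class InMemoryStore:
    """Test double; real backends implement the same shape."""
    def __init__(self) -> None:
        self._rows: List[Row] = []

    def list_memories(self, entity_id: str) -> List[Row]:
        # Tenant filtering lives in the store, not scattered across routes.
        return [r for r in self._rows if r["entity_id"] == entity_id]

    def insert_memory(self, entity_id: str, row: Row) -> None:
        self._rows.append({**row, "entity_id": entity_id})

def handler(store: DataStore, entity_id: str) -> List[Row]:
    # An API route never touches Supabase directly, only the interface.
    return store.list_memories(entity_id)
```

This is what makes Phase 6 additive: `PostgresStore` slots in behind the same interface with no route changes.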
Risk: Medium — refactoring ~20 API routes. Mitigated by doing it methodically, route by route.
Blocked by: Phase 4 (multi-tenant schema must exist first).
Phase 6: On-Premise Packaging
Goal: Enterprise customers can deploy the admin on their own infrastructure.
- Create `Dockerfile` for admin (Next.js standalone build).
- Create `docker-compose.yml` — admin + Postgres + omega_platform.
- Implement `PostgresStore` (raw `pg` client, no Supabase dependency).
- Implement `OIDCAuth` for enterprise SSO (Okta, Azure AD).
- Implement offline license key validation.
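Offline validation might follow the shape below. This sketch uses HMAC for brevity; a real deployment would use an asymmetric signature (e.g. Ed25519) so the verification key shipped with the product cannot mint licenses. Field names and the key format are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

# Minimal offline-license sketch (HMAC for brevity; a real scheme would
# verify an asymmetric signature). No network call is needed to validate.
SECRET = b"demo-signing-key"  # vendor-side only in a real deployment

def issue_license(customer: str, expires_at: int) -> str:
    payload = json.dumps({"customer": customer, "expires_at": expires_at},
                         sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def validate_license(key: str) -> bool:
    p64, _, s64 = key.partition(".")
    payload = base64.urlsafe_b64decode(p64)
    sig = base64.urlsafe_b64decode(s64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered payload or signature
    return json.loads(payload)["expires_at"] > time.time()
```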
Risk: Low — additive work, no changes to existing functionality.
Blocked by: Phase 5 (abstraction interfaces must exist).
Phase 4b: Onboarding Flow
Goal: New pro users can sign up, create a workspace, connect their CLI, and see data in the dashboard.
- Build signup flow (Supabase OAuth + email/password → `user_accounts` creation).
- Build workspace creation wizard (name → auto-slug → workspace type → create).
- Build "Connect CLI" page — generate workspace API key, display config snippet.
- Build `omega link` CLI command (or document manual config) for connecting local `omega serve` to dashboard.
- Build getting-started checklist (persistent sidebar widget, 6 items).
- Build BYOS database setup wizard (credentials → connection test → schema check → auto-migration → verification).
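Key generation for the "Connect CLI" page might look like the sketch below. The `omega_ws_` prefix, slug embedding, and hash-at-rest scheme are assumptions, not existing OMEGA behavior; the one constant worth keeping is that only a hash is stored server-side and the plaintext is shown once.

```python
import hashlib
import secrets

# Hypothetical workspace API key generation for the "Connect CLI" page.
# Prefix and storage scheme are assumptions.
def generate_workspace_key(workspace_slug: str) -> tuple:
    """Return (plaintext_key, stored_hash). Show plaintext once; store the hash."""
    token = secrets.token_urlsafe(32)
    plaintext = f"omega_ws_{workspace_slug}_{token}"
    stored_hash = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, stored_hash

def verify_key(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_hash)
```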
Risk: Medium — new UI and API work, but no existing functionality changes.
Blocked by: Phase 4 (workspaces table must exist), Phase 5 partially (auth abstraction helps but not required — can start with Supabase-only).
Rollback Strategy
Since Jason is the only user, rollback is straightforward:
- Pre-migration Supabase snapshot: Before Phase 4 begins, create a Supabase project backup (Settings → Database → Backups). This is the recovery point if schema migration fails.
- Phase 1-3 rollback: These are repo restructuring, not data changes. Revert via git if needed.
- Phase 4 rollback: Restore Supabase from snapshot. Revert admin code to pre-migration branch.
- Pause automated writes during NOT NULL cutover: The sync engine, hooks, and coordination all write data continuously. During the final Phase 4 step (setting `workspace_id` NOT NULL), temporarily stop `omega serve` to prevent rows without `workspace_id` from blocking the migration.
Phase Summary
| Phase | Description | Blocked By | Risk | Ships Independently |
|---|---|---|---|---|
| 1 | Extract memory repo | — | Medium | Yes |
| 1b | Resolve DIVERGED files | Phase 1 | High | Yes (but must precede Phase 2) |
| 2 | Restructure private repo | Phase 1b | High | Yes |
| 3 | Extract marketing site | — | Low | Yes |
| 4 | Multi-tenant schema | Phase 2 | High | Yes |
| 4b | Onboarding flow | Phase 4 | Medium | Yes |
| 5 | Auth & DB abstraction | Phase 4 | Medium | Yes |
| 6 | On-prem packaging | Phase 5 | Low | Yes |
Phases 1 and 3 can run in parallel. Phases 4b and 5 can run in parallel. Critical path: 1 → 1b → 2 → 4 → 5 → 6.
10. Dev Workflow
10.1 Local Development Setup
```make
# Makefile (in singularityjason/omega)
setup:
	# Clone memory repo as sibling (if not already)
	[ -d ../omega-memory ] || git clone git@github.com:omega-memory/omega-memory.git ../omega-memory
	# Install memory as editable dependency
	pip install -e "../omega-memory[full]"
	# Install platform
	pip install -e ".[full]"
	# Install admin dependencies
	cd admin && npm install

test: test-memory test-platform test-admin

test-memory:
	cd ../omega-memory && pytest -x

test-platform:
	pytest -x

test-admin:
	cd admin && npm test
```
10.2 AI Agent Scoping
Each repo gets .claude/CLAUDE.md rules that constrain agent behavior:
omega-memory/.claude/CLAUDE.md:
This is the standalone OMEGA memory engine (Apache-2.0, public).
There is NO coordination, orchestration, hooks, admin dashboard,
or pro features in this repo.
Do NOT reference omega_platform, coord_*, or commercial features.
This code must work independently with zero cloud dependencies.
omega/.claude/CLAUDE.md:
This is the OMEGA platform (commercial, private).
Memory engine is a SEPARATE repo (omega-memory) — a pip dependency.
Do NOT modify memory code here.
Python orchestration code: src/omega_platform/
Admin dashboard: admin/
Marketing site is in a SEPARATE repo (omega-website).
omega-website/.claude/CLAUDE.md:
This is the OMEGA marketing website (omegamax.co).
Landing pages, blog, docs, pricing, comparison pages.
No admin dashboard code. No Python code. No pro features.
The admin dashboard is in a SEPARATE repo.
10.3 CI Pipelines
omega-memory (GitHub Actions):
- `pytest -x` — memory tests only
- Build wheel
- On tag: publish to PyPI
omega (GitHub Actions):
- Install `omega-memory` (released version from PyPI)
- `pytest -x` — platform tests
- `cd admin && npm test` — admin tests
- On main push: deploy admin to Vercel
- Nightly: test against `omega-memory@main` (catch breaking changes early)
omega-website (GitHub Actions):
- `npm test`
- On main push: deploy to Vercel
10.4 Release Flow
Memory change:
- PR to `omega-memory` → merge → tag `vX.Y.Z` → PyPI publishes automatically
- PR to `omega` → bump `omega-memory` version in `pyproject.toml` → merge
Platform/admin change:
- PR to `omega` → merge → admin auto-deploys to Vercel
Marketing change:
- PR to `omega-website` → merge → auto-deploys to `omegamax.co`
11. Open Questions
- Naming: Is `omega_platform` the right Python package name, or should it be `omega_pro`, `omega_orchestrator`, or something else?
- Michael's PRs: His Astro dashboard PR and cloud backend abstraction PR contain useful patterns (especially the OIDC auth abstraction and pluggable database backends). Should these be cherry-picked into the new architecture, or treated as reference implementations?
- Sync engine scope: The current cloud sync pushes all data to Supabase. In the new model, it needs to push with `workspace_id` and `project_id` tags. Should the sync engine move to `omega_platform` (private) or stay in `omega-memory` (public) as a generic capability?
- Marketing site deployment: Should `omegamax.co/admin` be a Vercel rewrite to a separate Vercel project, or should both apps deploy to the same Vercel project using Vercel Services? Note: cross-project path routing on the same domain may require specific Vercel plan features — verify before committing.
- Timeline: What's the target timeline for Phases 1-4? Phases 5-6 can wait for enterprise demand.
- BYOS security model: The "Bring Your Own Supabase" flow stores users' service role keys encrypted on the managed platform. A service role key bypasses RLS entirely. Need to define: encryption-at-rest mechanism, key management (who holds the master key?), and whether a less-privileged credential model (anon key + RLS) is sufficient for BYOS reads.
- Extension point design: Phase 1b (DIVERGED file resolution) requires designing extension points in `bridge.py` and `mcp_server.py` where none currently exist. This is the highest-risk technical design work and should be prototyped before committing to the full split.
12. References
- Sentry vs Getsentry Architecture
- GitLab: Single Codebase for CE and EE
- Dagster Open Core Business Model
- PostHog ee/ Directory
- Cal.com Monorepo Structure
- Supabase RLS Performance Best Practices
- Multi-Tenant Architecture Patterns (Bytebase)
- Meeting notes: Jason Sosa & Michael Anton Fischer, March 18, 2026