
OMEGA Platform Architecture Redesign

Date: 2026-03-19
Status: Draft
Author: Jason Sosa + Claude
Stakeholders: Jason Sosa, Michael Anton Fischer

1. Problem Statement

The OMEGA project has grown from a standalone memory engine into a multi-layered platform encompassing memory, multi-agent orchestration, an admin dashboard, and a marketing site. The current architecture has several problems:

  1. AI agent confusion: Two repositories (private omega, public omega-memory) contain overlapping code. Agents hallucinate pro features when working on memory-only code and stumble when they encounter the dual-repo structure.
  2. No true multi-tenancy: The admin dashboard was built for a single user. The "Jimmy incident" (pre-March 2026) exposed that new users could see the owner's data via OAuth. Patches added user_id filtering and RLS, but there is no organizational hierarchy — no concept of entities-as-tenants or project-level data isolation.
  3. Supabase lock-in: Auth, database queries, and RLS are hardcoded to Supabase. This blocks on-premise deployment for enterprise customers and limits self-hosting options.
  4. Unclear boundaries: Memory, orchestration, admin dashboard, and marketing site all live in one private repo. Contributors to the open-source memory engine have no clean entry point.

Background

These issues were discussed in a March 18, 2026 meeting between Jason Sosa and Michael Anton Fischer. Key points from that meeting:

  • Michael recommended making memory a submodule of a separate orchestration system to cleanly separate the two halves of the project.
  • Multi-tenancy must be enforced at the core code level, not tacked on as an afterthought.
  • The current codebase is completely tied to Supabase and lacks actual auth enforcement in the core.
  • The project needs to support multi-tenant isolation where each tenant also needs internal multi-tenancy (per project, per entity).
  • Michael built a proof-of-concept Astro dashboard with multi-provider OIDC auth and a cloud backend abstraction layer (separate PR).

Licensing Note

The March 18 meeting discussed the Polyform Small Business license. This was subsequently ruled out — it violates the Kokyō Keishō Zaidan Stichting Charter Section 11 (core code must remain open source). The architecture must support Apache-2.0 for memory and a separate commercial license for the orchestration/platform layer. This further reinforces the need for clean repo separation.

2. Goals

  1. Eliminate AI agent confusion by separating memory, orchestration, and marketing into distinct repositories with clear boundaries and scoped .claude/ rules.
  2. Implement multi-tenant data isolation with a User → Entity → Project → Data hierarchy, enforced at the database level via RLS and at the application level via a DataStore interface.
  3. Abstract database and auth behind provider-agnostic interfaces to unblock on-premise deployment and "bring your own Supabase" self-hosting.
  4. Protect pro features by ensuring orchestration source code never appears in public repositories. Only the memory engine is open source.
  5. Enable the admin dashboard as a SaaS product — an "agent intelligence OS" where solo founders, agencies, and enterprises manage agents, jobs, projects, entities, and memories across multiple businesses.

Non-Goals

  • Migrating away from Supabase (it remains the primary backend; we add abstraction for future portability).
  • Building the on-premise Docker deployment (Phase 6; designed now, built later).
  • Implementing team/collaboration features within entities (future; the schema supports it).
  • Changing the Python memory engine's public API (omega-memory's interface stays stable).
  • Migrating the marketing site framework (stays Next.js).

3. Architecture Decision: Sentry Model

Research Summary

We evaluated repository patterns from 10 major open-source projects with commercial layers:

Pattern                      | Used By                        | Description
---------------------------- | ------------------------------ | -----------
ee/ directory in same repo   | GitLab, PostHog, Cal.com, n8n  | Enterprise code visible to all contributors
Private repo wraps public    | Sentry, Grafana                | Private repo imports public as dependency, extends it
Fully separate repos         | Temporal, Dagster, Airbyte     | Cloud is an architecturally distinct product
Monorepo, env-var gated      | Supabase                       | Everything in one repo, features toggled at runtime

No project uses git submodules for this purpose.

Decision: Private repo wraps public (Sentry Model)

Rationale:

  1. Pro code is invisible to contributors — the public repo contains only memory.
  2. Memory can evolve independently with community contributions.
  3. The private repo extends memory via pip dependency, adding orchestration and the admin dashboard.
  4. On-premise customers receive the private repo's Docker image — no memory internals exposed.
  5. Sentry validates this pattern at scale: getsentry/sentry (open, Python+TS) is extended by getsentry/getsentry (private, SaaS layer). Extension mechanisms include Django signals, swappable backends, and feature flags.

Why not ee/ directory (GitLab/PostHog model): Jason explicitly requires that pro code be invisible to contributors. The ee/ pattern makes enterprise code visible to anyone who clones the repo.

Why not fully separate repos (Temporal/Dagster model): The orchestration layer is tightly coupled to memory — it extends it, not replaces it. The overhead of fully decoupled repos isn't justified.

4. Repository Structure

Three Repositories

Repository                      | Visibility | Purpose                           | Deploys To
------------------------------- | ---------- | --------------------------------- | ----------
omega-memory/omega-memory       | Public     | Standalone memory engine          | PyPI (omega-memory)
singularityjason/omega          | Private    | Orchestration + admin dashboard   | Vercel (omegamax.co/admin) + Docker (on-prem)
singularityjason/omega-website  | Private    | Marketing site, blog, docs        | Vercel (omegamax.co)

4.1 omega-memory (Public)

omega-memory/omega-memory
├── src/omega/
│     ├── __init__.py
│     ├── sqlite_store/          ← storage engine
│     ├── bridge.py              ← high-level memory API
│     ├── embedding.py           ← ONNX embeddings
│     ├── schema.py              ← data models
│     ├── types.py               ← type definitions
│     ├── exceptions.py          ← error types
│     ├── json_compat.py         ← JSON utilities
│     ├── preferences.py         ← user preferences
│     ├── plugins.py             ← plugin interface
│     ├── crypto.py              ← encryption
│     ├── cli.py                 ← CLI interface
│     └── server/
│           ├── mcp_server.py    ← basic MCP server (memory tools only)
│           ├── handlers.py      ← core tool handlers
│           └── tool_schemas.py  ← core tool schemas
├── tests/                       ← memory-only tests
├── hooks/                       ← core hooks (subset of current hooks)
├── docs/
├── pyproject.toml               ← publishable as omega-memory on PyPI
├── LICENSE                      ← Apache-2.0
└── README.md

Installs as: pip install omega-memory
Imports: from omega.bridge import ..., from omega.sqlite_store import ...

4.2 omega (Private — Platform)

singularityjason/omega
├── src/omega_platform/
│     ├── orchestrator/
│     │     ├── coordination.py      ← multi-agent coordination
│     │     ├── conflicts.py         ← conflict resolution
│     │     ├── coord_reliability.py ← DLQ, circuit breaker
│     │     └── sandbox/             ← agent tool sandboxing (new)
│     ├── hooks/
│     │     ├── fast_hook.py         ← full dispatcher with coordination
│     │     ├── pre_file_guard.py
│     │     ├── pre_commit_guard.py
│     │     ├── coord_session_start.py
│     │     ├── coord_session_stop.py
│     │     ├── coord_heartbeat.py
│     │     ├── auto_capture.py
│     │     └── ...                  ← remaining hooks
│     ├── server/
│     │     ├── coord_handlers.py    ← coordination tool handlers
│     │     ├── coord_schemas.py     ← coordination tool schemas
│     │     ├── hook_server/         ← hook server implementation
│     │     ├── auth.py              ← server auth
│     │     └── jit_proxy.py
│     ├── cloud/                     ← Supabase sync engine
│     ├── entity/                    ← entity management
│     ├── knowledge/                 ← knowledge base
│     ├── oracle/                    ← oracle/router
│     ├── profile/                   ← user profiles
│     ├── protocol.py                ← OMEGA protocol
│     ├── advisor.py                 ← advisory engine
│     ├── pattern_learner.py
│     ├── thompson.py
│     └── license.py                 ← license validation
│
├── admin/                           ← Next.js dashboard (THE product)
│     ├── app/
│     │     ├── admin/               ← multi-tenant admin UI
│     │     │     ├── [entitySlug]/  ← entity-scoped views
│     │     │     └── settings/      ← account settings
│     │     ├── api/
│     │     │     └── admin/         ← admin API routes
│     │     └── login/               ← admin auth
│     ├── lib/
│     │     ├── db/
│     │     │     ├── interface.ts   ← DataStore interface
│     │     │     ├── supabase.ts    ← Supabase adapter
│     │     │     ├── postgres.ts    ← Raw Postgres (on-prem, future)
│     │     │     └── sqlite.ts      ← SQLite (air-gapped, future)
│     │     └── auth/
│     │           ├── interface.ts   ← AuthProvider interface
│     │           ├── supabase.ts    ← Supabase OAuth (current)
│     │           └── oidc.ts        ← Generic OIDC (enterprise, future)
│     ├── components/                ← admin UI components
│     ├── hooks/                     ← React hooks
│     ├── Dockerfile                 ← on-prem standalone build
│     └── package.json
│
├── supabase/
│     └── migrations/                ← all Supabase migrations
│
├── docker-compose.yml               ← on-prem: admin + postgres + omega
├── pyproject.toml                   ← depends on omega-memory
├── Makefile                         ← dev setup automation
└── LICENSE                          ← commercial

Python imports:

from omega.bridge import store_memory           # from omega-memory (pip)
from omega_platform.orchestrator import ...     # from this repo
from omega_platform.server.coord_handlers import ...

4.3 omega-website (Private — Marketing)

singularityjason/omega-website
├── app/
│     ├── page.tsx               ← landing page
│     ├── blog/                  ← blog
│     ├── docs/                  ← documentation
│     ├── pricing/               ← pricing page
│     ├── compare/               ← comparison pages (vs Mem0, etc.)
│     ├── pro/                   ← pro feature marketing
│     └── odyssey/               ← odyssey page
├── components/
├── lib/
├── public/
├── package.json
├── next.config.ts
└── vercel.json                  ← deploys to omegamax.co

Source code is private. The website is publicly accessible via browser at omegamax.co.

4.4 Domain Routing

omegamax.co/              → omega-website (Vercel project)
omegamax.co/blog/*        → omega-website
omegamax.co/docs/*        → omega-website
omegamax.co/pricing       → omega-website
omegamax.co/admin/*       → omega admin (Vercel project)
omegamax.co/api/admin/*   → omega admin (Vercel project)

Implemented via Vercel path-based routing across two projects on the same domain, or via rewrites in the marketing site's next.config.ts.
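If the rewrites approach is chosen, the marketing site's next.config.ts could look roughly like this (the admin project's deployment origin below is a placeholder assumption, not a real URL):

```typescript
// omega-website/next.config.ts — sketch only; ADMIN_ORIGIN is a placeholder
// for wherever the admin Vercel project is actually deployed.
import type { NextConfig } from "next";

const ADMIN_ORIGIN = "https://omega-admin.example.vercel.app"; // assumed

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      // Everything under /admin and /api/admin is served by the admin project.
      { source: "/admin/:path*", destination: `${ADMIN_ORIGIN}/admin/:path*` },
      { source: "/api/admin/:path*", destination: `${ADMIN_ORIGIN}/api/admin/:path*` },
    ];
  },
};

export default nextConfig;
```

With this in place, omegamax.co serves marketing pages directly and transparently proxies admin traffic, so both apps share one domain without path-based project routing.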

5. Multi-Tenant Data Architecture

5.1 Tenant Hierarchy

User (pro subscriber — founder, agency owner, consultant)
  ├── Entity A (a business, client, or account they manage)
  │     ├── Project 1 (a product, initiative, or campaign)
  │     │     ├── Agent Sessions
  │     │     ├── Memories
  │     │     ├── Jobs
  │     │     └── Coordination data
  │     └── Project 2
  │           └── ...
  ├── Entity B (another business)
  │     └── Project 3
  └── Entity C (a consulting client)
        └── ...

Each pro user manages multiple entities. Each entity contains multiple projects. All data (memories, sessions, tasks, jobs) belongs to exactly one entity and optionally one project within that entity.

5.2 Database Schema

Naming Collision: entities vs workspaces

The existing entities table (migration 20260225100000) stores tracked people and companies (entity graph data). The multi-tenant hierarchy needs a table for businesses/accounts a user manages. To avoid a semantic collision, the tenant-level table is named workspaces:

  • workspaces = businesses/clients the user manages (tenant boundary)
  • entities = tracked people, companies, organizations within a workspace (existing table, kept as-is)

Existing Schema Issues to Address

  1. Entity ID type: The existing entities table uses id TEXT PRIMARY KEY. All foreign keys referencing entity_id are TEXT. This spec does NOT propose migrating to UUID — the existing TEXT IDs remain. The new workspaces table uses UUID. The workspace_id column on data tables is UUID.
  2. Missing user_id on coord tables: The coord_sessions, coord_tasks, coord_messages, coord_handoffs, coord_intents, coord_decisions, coord_git_events, coord_metrics, coord_file_claims, and coord_file_reads tables do NOT currently have user_id columns (despite earlier multi-tenant work that added user_id to memories, tweets, and other tables). Both user_id and workspace_id must be added to all coord tables.
  3. Existing projects and entity_projects tables: Migration 20260302110000 created a projects table. Migration 20260318010000 created entity_projects. These must be reconciled — entity_projects evolves to become workspace-scoped projects, and the earlier projects table is deprecated or merged.

New tables:

-- User's subscription account
CREATE TABLE user_accounts (
  user_id UUID PRIMARY KEY REFERENCES auth.users,
  plan TEXT NOT NULL DEFAULT 'free',       -- free/pro/enterprise
  license_key TEXT,
  settings JSONB DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT now()
);

-- Workspaces the user manages (tenant boundary)
CREATE TABLE workspaces (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users,
  name TEXT NOT NULL,
  workspace_type TEXT,                     -- business/client/personal
  slug TEXT NOT NULL,                      -- URL segment: /admin/acme-corp
  settings JSONB DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now(),
  UNIQUE(user_id, slug)
);

-- Projects within a workspace
CREATE TABLE workspace_projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workspace_id UUID NOT NULL REFERENCES workspaces ON DELETE CASCADE,
  user_id UUID NOT NULL REFERENCES auth.users,  -- denormalized for RLS
  name TEXT NOT NULL,
  description TEXT,
  status TEXT NOT NULL DEFAULT 'active',
  settings JSONB DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now()
);

Modified tables — all data tables receive workspace_id and user_id (denormalized):

-- Memories (modified — user_id exists on some rows, needs backfill)
ALTER TABLE memories
  ADD COLUMN workspace_id UUID REFERENCES workspaces,
  ADD COLUMN project_id UUID REFERENCES workspace_projects;
  -- user_id already exists (added in migration 20260302100000, nullable, needs backfill)
  -- project_id is optional: per Section 5.1, data belongs to exactly one
  -- workspace and optionally one project within it

-- Coordination tables — BOTH user_id AND workspace_id must be added
-- (user_id does NOT currently exist on coord tables)
ALTER TABLE coord_sessions
  ADD COLUMN user_id UUID REFERENCES auth.users,
  ADD COLUMN workspace_id UUID REFERENCES workspaces;

ALTER TABLE coord_tasks
  ADD COLUMN user_id UUID REFERENCES auth.users,
  ADD COLUMN workspace_id UUID REFERENCES workspaces;

-- Same pattern for ALL coord tables:
-- coord_messages, coord_handoffs, coord_intents, coord_decisions,
-- coord_git_events, coord_metrics, coord_file_claims, coord_file_reads

-- Other data tables (user_id exists, add workspace_id):
-- tweets, engagement_suggestions, pending_events, job_approvals

-- Platform-global tables (NO workspace_id needed):
-- allowed_users, user_identities, licenses, pro_releases, download_events,
-- website_llm_usage, notification_settings

Indexes (critical for RLS performance):

CREATE INDEX idx_workspaces_user ON workspaces(user_id);
CREATE INDEX idx_workspace_projects_workspace ON workspace_projects(workspace_id);
CREATE INDEX idx_workspace_projects_user ON workspace_projects(user_id);
CREATE INDEX idx_memories_user ON memories(user_id);
CREATE INDEX idx_memories_workspace ON memories(workspace_id);
CREATE INDEX idx_memories_project ON memories(project_id);
CREATE INDEX idx_coord_sessions_user ON coord_sessions(user_id);
CREATE INDEX idx_coord_sessions_workspace ON coord_sessions(workspace_id);
-- ... same pattern (user_id + workspace_id) for all data tables

5.3 Row Level Security

-- RLS must be enabled on a table before its policies take effect
ALTER TABLE workspaces ENABLE ROW LEVEL SECURITY;

-- User isolation on workspaces
CREATE POLICY user_isolation ON workspaces
  USING (user_id = auth.uid())
  WITH CHECK (user_id = auth.uid());

-- User isolation on workspace projects (via denormalized user_id)
CREATE POLICY user_isolation ON workspace_projects
  USING (user_id = auth.uid())
  WITH CHECK (user_id = auth.uid());

-- User isolation on all data tables (same pattern)
CREATE POLICY user_isolation ON memories
  USING (user_id = auth.uid())
  WITH CHECK (user_id = auth.uid());

-- Repeat (ENABLE ROW LEVEL SECURITY + policy) for every data table:
-- coord_sessions, coord_tasks, coord_messages, entities (tracked), etc.

Why user_id = auth.uid() and not a join-based policy:

  • Single column check is the fastest possible RLS evaluation.
  • No joins to workspaces or membership tables — the denormalized user_id makes this O(1).
  • Workspace and project scoping happens at the application layer (the DataStore filters by workspace_id and project_id), with RLS as the safety net ensuring a user can never access another user's data even if application code has a bug.
  • When team features are added later, the RLS policy extends to include a workspace_collaborators table — but the base user_id check remains as defense-in-depth.

5.4 DataStore Interface

// admin/lib/db/interface.ts

export interface MemoryFilters {
  workspaceId: string;
  projectId?: string;
  eventType?: string;
  memoryType?: string;
  search?: string;
  limit?: number;
  offset?: number;
}

export interface TrackedEntityFilters {
  entityType?: string;
  status?: string;
}

export interface SessionFilters {
  workspaceId: string;
  projectId?: string;
  status?: string;
}

export interface DataStore {
  // User account
  getUserAccount(userId: string): Promise<UserAccount | null>;

  // Workspaces (businesses/clients the user manages)
  getWorkspaces(userId: string): Promise<Workspace[]>;
  getWorkspace(userId: string, workspaceSlug: string): Promise<Workspace | null>;
  createWorkspace(userId: string, data: CreateWorkspaceInput): Promise<Workspace>;
  updateWorkspace(userId: string, workspaceId: string, data: UpdateWorkspaceInput): Promise<Workspace>;

  // Projects within a workspace
  getProjects(userId: string, workspaceId: string): Promise<Project[]>;
  getProject(userId: string, projectId: string): Promise<Project | null>;
  createProject(userId: string, workspaceId: string, data: CreateProjectInput): Promise<Project>;

  // Memories
  getMemories(userId: string, filters: MemoryFilters): Promise<Memory[]>;
  getMemoryGraph(userId: string, workspaceId: string, projectId?: string): Promise<GraphData>;

  // Tracked entities (people/companies within a workspace)
  getTrackedEntities(userId: string, workspaceId: string, filters?: TrackedEntityFilters): Promise<TrackedEntity[]>;
  getEntityGraph(userId: string, workspaceId: string): Promise<GraphData>;

  // Coordination
  getSessions(userId: string, filters: SessionFilters): Promise<Session[]>;
  getTasks(userId: string, workspaceId: string): Promise<Task[]>;
  getMessages(userId: string, sessionId: string): Promise<Message[]>;
  getHandoffs(userId: string, workspaceId: string): Promise<Handoff[]>;

  // Jobs
  getJobs(userId: string, workspaceId: string): Promise<Job[]>;

  // Dashboard metrics
  getDashboardMetrics(userId: string, workspaceId: string): Promise<DashboardMetrics>;
}

Implementations:

  • SupabaseStore — uses Supabase client with RLS (ships first)
  • PostgresStore — raw pg client for on-premise Postgres (Phase 6)
  • SQLiteStore — for air-gapped enterprise deployments (Phase 6)
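A hypothetical in-memory stub of a few DataStore methods (not part of the spec; fields are trimmed down for illustration) shows the per-call userId scoping and makes admin routes unit-testable without a live backend:

```typescript
// Hypothetical stub — every method takes userId first, mirroring the
// RLS user_id = auth.uid() check at the application layer.
interface Workspace { id: string; userId: string; name: string; slug: string; }

class InMemoryStore {
  private workspaces: Workspace[] = [];

  async createWorkspace(userId: string, data: { name: string; slug: string }): Promise<Workspace> {
    const ws: Workspace = { id: String(this.workspaces.length + 1), userId, ...data };
    this.workspaces.push(ws);
    return ws;
  }

  // Reads only ever return rows owned by the calling user.
  async getWorkspaces(userId: string): Promise<Workspace[]> {
    return this.workspaces.filter((w) => w.userId === userId);
  }

  async getWorkspace(userId: string, workspaceSlug: string): Promise<Workspace | null> {
    return this.workspaces.find((w) => w.userId === userId && w.slug === workspaceSlug) ?? null;
  }
}
```

Because every signature threads userId through explicitly, a stub like this can assert that one user's queries never see another user's rows, independent of RLS.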

5.5 AuthProvider Interface

// admin/lib/auth/interface.ts

export interface User {
  id: string;
  email: string;
  name?: string;
  avatarUrl?: string;
}

export interface AuthSession {
  user: User;
  accessToken: string;
  expiresAt: Date;
}

export interface AuthProvider {
  getCurrentUser(request: Request): Promise<User | null>;
  validateSession(request: Request): Promise<AuthSession | null>;
  getUserAccount(userId: string): Promise<UserAccount | null>;
  signIn(credentials: unknown): Promise<AuthSession>;
  signOut(request: Request): Promise<void>;
}

Implementations:

  • SupabaseAuth — current Supabase OAuth + password/passkey (ships first)
  • OIDCAuth — generic OpenID Connect for enterprise SSO (Phase 6)
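As a sketch of how the interface might be consumed (the guard helper and the 401 error shape below are assumptions, not spec'd API; the local types are trimmed versions of the ones above), an admin API route can be wrapped so it only executes for an authenticated user:

```typescript
// Minimal local types for illustration; the spec's User and AuthProvider
// interfaces are richer.
interface AuthUser { id: string; email: string; }

interface MinimalAuthProvider {
  getCurrentUser(request: Request): Promise<AuthUser | null>;
}

// Wrap an admin API handler so it only runs for authenticated users.
function withAuth(
  provider: MinimalAuthProvider,
  handler: (request: Request, user: AuthUser) => Promise<Response>,
): (request: Request) => Promise<Response> {
  return async (request) => {
    const user = await provider.getCurrentUser(request);
    if (user === null) {
      // 401 with a JSON body; the error shape is an assumption.
      return new Response(JSON.stringify({ error: "unauthorized" }), { status: 401 });
    }
    return handler(request, user);
  };
}
```

Because the guard depends only on the interface, swapping SupabaseAuth for OIDCAuth later requires no changes to route handlers.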

5.6 "Bring Your Own Supabase" Flow

For self-hosted pro users who want their data on their own Supabase instance:

  1. User navigates to /admin/settings → "Database" section.
  2. Enters their Supabase project URL, anon key, and service role key.
  3. Credentials are stored encrypted in user_accounts.settings on the managed platform database.
  4. On first connect, the admin runs a schema version check against the user's Supabase. If migrations are needed, it provides a one-click migration runner or downloadable SQL.
  5. All subsequent data queries for that user create a SupabaseStore with the user's credentials, pointing to the user's database.
  6. The managed platform database stores only: user_accounts, entities (slugs/metadata), and encrypted credentials. All memories, sessions, and coordination data live on the user's Supabase.
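Step 5's routing decision can be sketched as a pure function (the settings.byos shape is an assumption; the spec only says credentials are stored encrypted in user_accounts.settings):

```typescript
// Assumed shape of the decrypted per-user settings blob.
interface AccountSettings {
  byos?: { supabaseUrl: string; anonKey: string };
}

type Backend =
  | { kind: "managed" }
  | { kind: "byos"; supabaseUrl: string; anonKey: string };

// Decide where a user's data queries should go.
function resolveBackend(settings: AccountSettings): Backend {
  if (settings.byos?.supabaseUrl) {
    // BYOS user: point a SupabaseStore at their own instance.
    return { kind: "byos", ...settings.byos };
  }
  // Default: the managed platform database.
  return { kind: "managed" };
}
```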

6. Admin UI Structure

6.1 URL Hierarchy

/admin                                  → redirect to default workspace or workspace picker
/admin/settings                         → account settings (subscription, license, database config)
/admin/:workspaceSlug                   → workspace dashboard (overview metrics, recent activity)
/admin/:workspaceSlug/projects          → project list for this workspace
/admin/:workspaceSlug/projects/:slug    → project detail (agents, memories, jobs)
/admin/:workspaceSlug/agents            → active agent sessions across all projects
/admin/:workspaceSlug/memories          → memory graph for this workspace
/admin/:workspaceSlug/entities          → tracked entities (people, companies) within this workspace
/admin/:workspaceSlug/jobs              → job queue and audit log
/admin/:workspaceSlug/insights          → AI-generated insights for this workspace
/admin/:workspaceSlug/settings          → workspace-level settings

6.2 Navigation

Sidebar:
  [Workspace Picker Dropdown]   ← switch between workspaces (businesses/clients)
  ─────────────────────
  Dashboard                     ← workspace overview
  Projects                      ← projects within workspace
  Agents                        ← coordination/sessions
  Memories                      ← memory graph
  Entities                      ← tracked people/companies
  Jobs                          ← job queue
  Insights                      ← AI insights
  ─────────────────────
  Settings (workspace)
  Settings (account)            ← subscription, DB config

6.3 Workspace Picker

When a user has multiple workspaces, the sidebar shows a dropdown at the top. Selecting a workspace scopes all views to that workspace's data. The current workspace slug is in the URL, so deep links work and browser history is workspace-aware.

7. New User Onboarding

7.1 Research-Backed Design Principles

Based on analysis of Linear, Vercel, Supabase, Sentry, Stripe, Datadog, Fly.io, and others:

  1. Progressive disclosure: Collect 2-3 fields at signup, defer everything else. Never gate first-value behind email verification or profile completion.
  2. Smart defaults: Derive workspace name from context (email domain, git remote). Auto-generate slug. Reduce decisions.
  3. First-value moment: Generate a verification step that proves the product works (Sentry's "Throw Sample Error" pattern). For OMEGA: first memory stored + visible in dashboard.
  4. Self-hosted divergence point: Keep everything after admin setup identical between managed and self-hosted. Only the endpoint URL changes.
  5. CLI-to-dashboard bridge: Use an API key or token as the connection mechanism (Datadog pattern), with browser OAuth for interactive setup (Vercel pattern).

7.2 Onboarding Flow

MANAGED PATH:
  Signup (OAuth/email)
    → Create first workspace (name only, slug auto-generated)
    → Choose: "Connect CLI" or "Explore dashboard"
    → If CLI: copy config snippet with API key + workspace_id
    → If dashboard: show demo data / getting-started checklist
    → First value: memory appears in dashboard

SELF-HOSTED PATH (BYOS):
  Signup (OAuth/email)
    → Create first workspace
    → "Connect your database" wizard
      → Enter Supabase URL + keys (or Postgres connection string)
      → Auto-run schema migrations against their instance
      → Verify connection
    → Same CLI/dashboard flow as managed

ON-PREM PATH:
  Docker deployment (admin creates instance)
    → First user becomes admin (like GitLab CE)
    → Configure auth (OIDC provider URL, client ID)
    → Create first workspace
    → Same CLI/dashboard flow

7.3 Signup → First Workspace (Web)

Step 1: Account creation (1 screen)

  • Sign up via Supabase OAuth (Google, GitHub) or email/password
  • Creates user_accounts row with plan: 'free'
  • No email verification gate — user proceeds immediately (verification can happen async)

Step 2: Create first workspace (1 screen, 1 required field)

  • Field: Workspace name (e.g., "Acme Corp", "My Agency", "Personal")
  • Auto-generated: slug from name (editable)
  • Optional: workspace type (business / client / personal) — defaults to "business"
  • Optional: icon/avatar — small personalization step increases commitment (the IKEA effect)
  • Creates workspaces row
  • Skippable for managed-only users? No — workspace is required to scope all data
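The slug auto-generation in Step 2 might look like the following (the exact rules here, lowercase, hyphenate, trim, are assumptions rather than spec'd behavior):

```typescript
// Hypothetical slug derivation from the workspace name; the user can
// still edit the result before saving.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .normalize("NFKD")
    .replace(/[\u0300-\u036f]/g, "") // strip diacritics
    .replace(/[^a-z0-9]+/g, "-")     // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "");        // trim leading/trailing hyphens
}

// slugify("Acme Corp") → "acme-corp"
```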

Step 3: Choose path (1 screen, 2 options)

┌─────────────────────┐  ┌─────────────────────┐
│  Connect your CLI    │  │  Explore Dashboard   │
│                      │  │                      │
│  Already running     │  │  See what OMEGA can  │
│  omega serve?        │  │  do with sample data │
│  Connect it now.     │  │                      │
└─────────────────────┘  └─────────────────────┘

7.4 CLI Connection (the bridge)

Pattern: API key + config file (Datadog/Sentry hybrid)

When the user chooses "Connect your CLI", the dashboard:

  1. Generates a workspace API key (stored in workspaces.settings.api_keys[])
  2. Displays a ready-to-copy config snippet:
# Add to ~/.omega/config.yaml
user_id: "abc-123-def"
workspace: "acme-corp"
workspace_key: "omk_live_xxxxxxxxxxxx"
sync:
  enabled: true
  url: "https://omegamax.co/api/sync"   # or their own Supabase URL for BYOS
  3. User pastes into their config, restarts omega serve
  4. CLI connects, syncs first batch of local memories to dashboard
  5. Dashboard shows "Connection successful! X memories synced" — the first-value moment

For CI/headless environments: OMEGA_WORKSPACE_KEY environment variable (no browser needed).
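A sketch of step 1's key generation (the omk_live_ prefix mirrors the snippet above; the key length and alphabet are assumptions):

```typescript
import { randomBytes } from "node:crypto";

// 24 random bytes encode to exactly 32 base64url characters, prefixed
// so keys are recognizable in logs and support tickets.
function generateWorkspaceKey(): string {
  return "omk_live_" + randomBytes(24).toString("base64url");
}

// Cheap shape check before hitting the database on sync requests.
function isWorkspaceKey(key: string): boolean {
  return /^omk_live_[A-Za-z0-9_-]{32}$/.test(key);
}
```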

Alternative: omega link command (Vercel pattern, future enhancement):

omega link
# Opens browser → user authenticates → selects workspace → writes config automatically

7.5 BYOS Database Setup (self-hosted Supabase)

When a user chooses "Connect your database" in the BYOS flow:

  1. Enter credentials: Supabase URL + anon key + service role key (or Postgres connection string for on-prem)
  2. Connection test: Dashboard pings the database to verify connectivity
  3. Schema check: Compares the user's database schema version against the required version
  4. Auto-migration: If schema is behind, show the required migrations with a "Run migrations" button (or provide downloadable SQL for review-first users)
  5. Verification: Insert a test row, read it back, delete it — confirms RLS and write access work
  6. Store credentials: Encrypted in user_accounts.settings on the managed platform (for routing future requests)

Error handling: If credentials are wrong, connection fails, or migrations can't run — show clear error messages with next steps, never silently fail.
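Steps 3-4's schema check can be sketched as a pure diff over migration names (the names in the test are hypothetical; real ones follow the repo's timestamp-prefix convention):

```typescript
// Compare the user's applied migrations against the required list.
// The YYYYMMDDHHMMSS prefix makes lexicographic order chronological.
function pendingMigrations(applied: string[], required: string[]): string[] {
  const have = new Set(applied);
  return required.filter((name) => !have.has(name)).sort();
}
```

The "Run migrations" button would then execute the returned list in order, or emit it as downloadable SQL for review-first users.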

7.6 Getting-Started Checklist (post-onboarding)

Persistent in the dashboard sidebar until dismissed (Stripe/Sentry pattern):

Getting Started                    [3/6 complete]
✅ Create your workspace
✅ Connect your CLI
✅ Store your first memory
☐ Create a project
☐ Set up your first entity
☐ Explore the memory graph

Each checklist item links to the relevant dashboard section or documentation. Completing all items triggers a "You're all set!" dismissal with an optional tour of advanced features.

7.7 Demo/Sample Data (optional, future)

For users who choose "Explore Dashboard" before connecting their CLI, pre-populate the workspace with sample data:

  • 50 sample memories across 3 entity types
  • 5 sample agent sessions with coordination events
  • 2 sample projects with memories scoped to each
  • A pre-built memory graph visualization

Sample data is clearly labeled as demo content and can be deleted with one click when the user is ready to use real data. This follows Linear's "interactive demo workspace" pattern.

8. Python Integration

8.1 omega serve Configuration

When a user runs omega serve locally, they configure their workspace and project context:

# ~/.omega/config.yaml
user_id: "uuid-from-auth"
default_workspace: "acme-corp"
default_project: "product-launch"
sync:
  enabled: true
  backend: "supabase"           # or "postgres", "sqlite"
  supabase_url: "https://xxx.supabase.co"
  supabase_key: "..."

All memories, sessions, and coordination data created by the local MCP server are tagged with workspace_id and project_id from this config. The sync engine pushes to the configured backend with these tags intact.

7.2 omega_platform Extending omega

The private omega_platform package extends the public omega package:

# omega_platform extends omega's MCP server with coordination tools
from omega.server.mcp_server import create_server  # from omega-memory
from omega_platform.server.coord_handlers import register_coord_tools

server = create_server()
register_coord_tools(server)  # adds coordination, hooks, etc.

The extension mechanism follows Sentry's pattern: the public package defines clean extension points (tool registration, hook system, plugin interface), and the private package plugs into them.

9. Migration Strategy

Current State

  • Jason is the only admin user. Jimmy and Michael are not using the dashboard.
  • All existing data belongs to Jason's account.
  • The public repo (omega-memory) exists but is a manual sync target, ~40 commits behind.
  • The marketing site and admin dashboard are co-located in ~/Projects/omega/website/.

Phase 1: Extract Memory into Standalone Repo

Goal: omega-memory/omega-memory becomes the source of truth for memory code.

  1. Clean up omega-memory repo — ensure it builds and tests independently with no pro code.
  2. Verify all memory-only tests pass standalone.
  3. Publish updated version to PyPI.
  4. Delete sync-manifest.yaml and scripts/sync-to-public.py from private repo.
  5. Add omega-memory as pip dependency in private repo's pyproject.toml.

Risk: Medium — import paths may break. Mitigated by the fact that omega-memory already publishes to PyPI and exposes the same omega import namespace, so from omega.* imports are unchanged.

Can run in parallel with: Phase 3.

Phase 1b: Resolve DIVERGED Files

Goal: Split the 8 DIVERGED files into clean public base + private extension.

The following files exist in both repos with different content (pro code interleaved with core code):

  • bridge.py — 5+ lazy imports from omega.coordination woven into function bodies
  • mcp_server.py — 23+ imports from pro modules (license, coordination, hook_server, embedding_daemon, pid_registry)
  • handlers.py — core tool handlers + pro tool registration
  • tool_schemas.py — core schemas + pro schemas
  • embedding.py — core ONNX engine + pro embedding daemon client
  • plugins.py, types.py, __init__.py — minor divergences
  • sqlite_store/ — package with core storage + pro extensions

Resolution strategy per file:

  1. bridge.py: Extract a clean omega.bridge (public) with extension hooks. Pro-specific functions (coordination-aware store, protocol-aware query) move to omega_platform.bridge_ext which monkey-patches or wraps the base bridge.
  2. mcp_server.py: Extract a clean omega.server.mcp_server (public) that creates a base server with core tools only. Expose a register_tools(server) hook. omega_platform.server.platform_server calls create_server() then registers coordination tools, hook server, embedding daemon, license checks.
  3. handlers.py / tool_schemas.py: Split into core handlers/schemas (public) and coord_handlers/coord_schemas (already separate, pro). The DIVERGED handlers get their pro-specific tool registrations removed — those are added by omega_platform.
  4. embedding.py: Public version is pure ONNX. Pro extension adds daemon client + shared socket.
  5. sqlite_store/: Public version is the core storage engine. Pro adds entity isolation, sensitivity classification, and cloud sync hooks via the existing plugin system.

Each DIVERGED file must have its public version tested independently (no pro imports, no try/except ImportError fallbacks to pro modules).
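For strategy 1, wrapping is generally safer than monkey-patching because the public bridge stays untouched and independently testable. A sketch of the wrapper approach, with invented class and method names (`Bridge.store`, `CoordAwareBridge`) standing in for the real `omega.bridge` / `omega_platform.bridge_ext` API:

```python
# Illustrative sketch of DIVERGED-file strategy 1: the pro package wraps
# the public bridge instead of forking it. Class/method names are assumptions.


class Bridge:
    """Public omega.bridge: core store/query, zero pro imports."""

    def store(self, item: dict) -> dict:
        return {"status": "stored", **item}


class CoordAwareBridge:
    """Pro wrapper (omega_platform.bridge_ext): layers coordination
    metadata on top of the base behaviour without modifying it."""

    def __init__(self, base: Bridge, agent_id: str):
        self.base = base
        self.agent_id = agent_id

    def store(self, item: dict) -> dict:
        # Pro feature layered on top: tag every write with the coordinating agent.
        result = self.base.store(item)
        result["agent_id"] = self.agent_id
        return result
```

The same pattern extends to the protocol-aware query path: the wrapper pre/post-processes and delegates, so the public test suite exercises `Bridge` with no knowledge that `CoordAwareBridge` exists.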

Risk: HIGH — this is the most architecturally complex phase. The entanglement in bridge.py and mcp_server.py is deep. Requires careful design of extension points where none currently exist.

Blocked by: Phase 1 (public repo must be the source of truth first).

Must complete before: Phase 2 (private repo restructure depends on clean separation).

Phase 2: Restructure Private Repo

Goal: Private repo becomes the platform repo with clean boundaries.

  1. Rename website/ → admin/.

  2. Create src/omega_platform/ namespace.
  3. Move orchestration code: coordination.py, conflicts.py, coord_reliability.py, hooks, coord_handlers, coord_schemas, cloud, entity, knowledge, oracle, profile, protocol, advisor, license.
  4. Remove memory code from private repo (it's now a pip dependency via omega-memory).
  5. Update all internal imports to use omega_platform.*. This includes ~116 cross-references across 30 Python files — many are lazy imports inside function bodies.
  6. Update Vercel project configuration to build from admin/ instead of website/.
  7. Verify all tests pass with omega-memory as pip dependency (not local source).

Risk: HIGH — large refactor with many import changes, compounded by the DIVERGED file resolution in Phase 1b. Mitigated by comprehensive test suite (77 test files) and the fact that Phase 1b already established the clean extension points.

Blocked by: Phase 1b (DIVERGED files must be resolved first).

Phase 3: Extract Marketing Site

Goal: Marketing pages in their own repo, deployed to omegamax.co.

  1. Create singularityjason/omega-website repo.
  2. Move from current website/: landing page, blog, docs, pricing, compare pages, pro marketing page, odyssey, public assets, OG image generation.
  3. Set up Vercel project → deploys to omegamax.co.
  4. Configure path routing so /admin/* routes to the admin Vercel project.

Risk: Low — straightforward file move.

Can run in parallel with: Phase 1.

Phase 4: Multi-Tenant Schema Migration

Goal: Implement User → Entity → Project → Data hierarchy.

  1. Create user_accounts table.
  2. Evolve entities table — add slug, ensure user_id is populated.
  3. Evolve entity_projects → projects table with entity_id relationship.
  4. Add entity_id column (nullable initially) to all data tables.
  5. Backfill: all existing data → Jason's user_id, a default entity (e.g., "omega").
  6. Add RLS policies (alongside existing ones).
  7. Add indexes on all user_id, entity_id, project_id columns.
  8. Update all admin API routes to accept entity context and filter accordingly.
  9. Set entity_id to NOT NULL after backfill is verified.
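Steps 4, 5, and 9 form the standard nullable → backfill → verify → NOT NULL sequence. A runnable illustration using SQLite (production targets Postgres/Supabase, and the table/column names here are examples, not the real schema):

```python
import sqlite3

# Illustrative Phase 4 migration sequence (SQLite stand-in for Postgres;
# "memories" and the default entity "omega" are example names).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO memories (body) VALUES ('note 1'), ('note 2')")

# Step 4: add entity_id as nullable so existing writes keep succeeding.
conn.execute("ALTER TABLE memories ADD COLUMN entity_id TEXT")

# Step 5: backfill all existing rows to the default entity.
conn.execute("UPDATE memories SET entity_id = 'omega' WHERE entity_id IS NULL")

# Step 9 precondition: verify the backfill before flipping to NOT NULL.
remaining = conn.execute(
    "SELECT COUNT(*) FROM memories WHERE entity_id IS NULL"
).fetchone()[0]
assert remaining == 0  # only then enforce NOT NULL in a follow-up migration
```

Keeping the NOT NULL flip as a separate, final migration means every intermediate state is safe for live writes, which is what makes the rollback story below simple.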

Risk: High — data migration, but mitigated by Jason being the only user (no multi-user coordination needed).

Blocked by: Phase 2 (admin must be restructured first).

Phase 5: Auth & Database Abstraction

Goal: Implement DataStore and AuthProvider interfaces.

  1. Define DataStore interface in admin/lib/db/interface.ts.
  2. Implement SupabaseStore — refactor all API routes from direct Supabase calls to DataStore methods.
  3. Define AuthProvider interface in admin/lib/auth/interface.ts.
  4. Implement SupabaseAuth — refactor current auth to use the interface.
  5. Add "bring your own Supabase" settings flow (credentials entry, schema version check, migration runner).

Risk: Medium — refactoring ~20 API routes. Mitigated by doing it methodically, route by route.

Blocked by: Phase 4 (multi-tenant schema must exist first).

Phase 6: On-Premise Packaging

Goal: Enterprise customers can deploy the admin on their own infrastructure.

  1. Create Dockerfile for admin (Next.js standalone build).
  2. Create docker-compose.yml — admin + Postgres + omega_platform.
  3. Implement PostgresStore (raw pg client, no Supabase dependency).
  4. Implement OIDCAuth for enterprise SSO (Okta, Azure AD).
  5. Implement offline license key validation.
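A simplified sketch of step 5's validation shape. HMAC with a shared secret is used here only to keep the example stdlib-only and runnable; a real offline scheme would verify an asymmetric signature (e.g. Ed25519) against a public key embedded in the build, so customers cannot forge keys. The key format (`base64(payload).base64(signature)`) and claim names are assumptions:

```python
import base64
import hashlib
import hmac
import json
import time


def issue_license(claims: dict, secret: bytes) -> str:
    """Vendor-side counterpart (would live on the licensing server)."""
    payload = json.dumps(claims).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())


def validate_license(key: str, secret: bytes) -> dict:
    """Validate base64(payload).base64(signature) fully offline.

    Simplified: real deployments should use a public-key signature so the
    verification key shipped with the product cannot be used to sign."""
    payload_b64, sig_b64 = key.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid license signature")
    claims = json.loads(payload)
    if claims["expires_at"] < time.time():
        raise ValueError("license expired")
    return claims
```

Expiry lives inside the signed payload, so an on-prem deployment needs no callback to the licensing server to enforce it.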

Risk: Low — additive work, no changes to existing functionality.

Blocked by: Phase 5 (abstraction interfaces must exist).

Phase 4b: Onboarding Flow

Goal: New pro users can sign up, create a workspace, connect their CLI, and see data in the dashboard.

  1. Build signup flow (Supabase OAuth + email/password → user_accounts creation).
  2. Build workspace creation wizard (name → auto-slug → workspace type → create).
  3. Build "Connect CLI" page — generate workspace API key, display config snippet.
  4. Build omega link CLI command (or document manual config) for connecting local omega serve to dashboard.
  5. Build getting-started checklist (persistent sidebar widget, 6 items).
  6. Build BYOS database setup wizard (credentials → connection test → schema check → auto-migration → verification).
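The server side of step 3 reduces to "mint a key, render a snippet." A sketch under loudly stated assumptions: the `omk_` key prefix and the config field names are invented for illustration, since neither the real key format nor the omega serve config schema is defined yet:

```python
# Hypothetical sketch of the "Connect CLI" step. Key format ("omk_" prefix)
# and config field names are invented, not the real omega schema.
import json
import secrets


def generate_connect_snippet(workspace_slug: str, api_base: str) -> tuple[str, str]:
    """Mint a workspace API key and render the config snippet the user
    pastes into their local omega setup."""
    api_key = "omk_" + secrets.token_urlsafe(32)
    snippet = json.dumps(
        {"workspace": workspace_slug, "api_base": api_base, "api_key": api_key},
        indent=2,
    )
    return api_key, snippet
```

Only a hash of the minted key should be stored server-side, mirroring standard API-key practice.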

Risk: Medium — new UI and API work, but no existing functionality changes.

Blocked by: Phase 4 (workspaces table must exist), Phase 5 partially (auth abstraction helps but not required — can start with Supabase-only).

Rollback Strategy

Since Jason is the only user, rollback is straightforward:

  • Pre-migration Supabase snapshot: Before Phase 4 begins, create a Supabase project backup (Settings → Database → Backups). This is the recovery point if schema migration fails.
  • Phase 1-3 rollback: These are repo restructuring, not data changes. Revert via git if needed.
  • Phase 4 rollback: Restore Supabase from snapshot. Revert admin code to pre-migration branch.
  • Pause automated writes during NOT NULL cutover: The sync engine, hooks, and coordination all write data continuously. During the final Phase 4 step (setting entity_id NOT NULL), temporarily stop omega serve to prevent rows without entity_id from blocking the migration.

Phase Summary

| Phase | Description | Blocked By | Risk | Ships Independently |
|-------|-------------|------------|------|---------------------|
| 1 | Extract memory repo | — | Medium | Yes |
| 1b | Resolve DIVERGED files | Phase 1 | High | Yes (but must precede Phase 2) |
| 2 | Restructure private repo | Phase 1b | High | Yes |
| 3 | Extract marketing site | — | Low | Yes |
| 4 | Multi-tenant schema | Phase 2 | High | Yes |
| 4b | Onboarding flow | Phase 4 | Medium | Yes |
| 5 | Auth & DB abstraction | Phase 4 | Medium | Yes |
| 6 | On-prem packaging | Phase 5 | Low | Yes |

Phases 1 and 3 can run in parallel. Phases 4b and 5 can run in parallel. Critical path: 1 → 1b → 2 → 4 → 5 → 6.

10. Dev Workflow

10.1 Local Development Setup

# Makefile (in singularityjason/omega)

setup:
    # Clone memory repo as sibling (if not already)
    [ -d ../omega-memory ] || git clone git@github.com:omega-memory/omega-memory.git ../omega-memory
    # Install memory as editable dependency
    pip install -e "../omega-memory[full]"
    # Install platform
    pip install -e ".[full]"
    # Install admin dependencies
    cd admin && npm install

test:
    cd ../omega-memory && pytest -x
    pytest -x
    cd admin && npm test

test-memory:
    cd ../omega-memory && pytest -x

test-platform:
    pytest -x

test-admin:
    cd admin && npm test

10.2 AI Agent Scoping

Each repo gets .claude/CLAUDE.md rules that constrain agent behavior:

omega-memory/.claude/CLAUDE.md:

This is the standalone OMEGA memory engine (Apache-2.0, public).
There is NO coordination, orchestration, hooks, admin dashboard,
or pro features in this repo.
Do NOT reference omega_platform, coord_*, or commercial features.
This code must work independently with zero cloud dependencies.

omega/.claude/CLAUDE.md:

This is the OMEGA platform (commercial, private).
Memory engine is a SEPARATE repo (omega-memory) — a pip dependency.
Do NOT modify memory code here.
Python orchestration code: src/omega_platform/
Admin dashboard: admin/
Marketing site is in a SEPARATE repo (omega-website).

omega-website/.claude/CLAUDE.md:

This is the OMEGA marketing website (omegamax.co).
Landing pages, blog, docs, pricing, comparison pages.
No admin dashboard code. No Python code. No pro features.
The admin dashboard is in a SEPARATE repo.

10.3 CI Pipelines

omega-memory (GitHub Actions):

  1. pytest -x — memory tests only
  2. Build wheel
  3. On tag: publish to PyPI

omega (GitHub Actions):

  1. Install omega-memory (released version from PyPI)
  2. pytest -x — platform tests
  3. cd admin && npm test — admin tests
  4. On main push: deploy admin to Vercel
  5. Nightly: test against omega-memory@main (catch breaking changes early)

omega-website (GitHub Actions):

  1. npm test
  2. On main push: deploy to Vercel

10.4 Release Flow

Memory change:

  1. PR to omega-memory → merge → tag vX.Y.Z → PyPI publishes automatically
  2. PR to omega → bump omega-memory version in pyproject.toml → merge

Platform/admin change:

  1. PR to omega → merge → admin auto-deploys to Vercel

Marketing change:

  1. PR to omega-website → merge → auto-deploys to omegamax.co

11. Open Questions

  1. Naming: Is omega_platform the right Python package name, or should it be omega_pro, omega_orchestrator, or something else?
  2. Michael's PRs: His Astro dashboard PR and cloud backend abstraction PR contain useful patterns (especially the OIDC auth abstraction and pluggable database backends). Should these be cherry-picked into the new architecture, or treated as reference implementations?
  3. Sync engine scope: The current cloud sync pushes all data to Supabase. In the new model, it needs to push with workspace_id and project_id tags. Should the sync engine move to omega_platform (private) or stay in omega-memory (public) as a generic capability?
  4. Marketing site deployment: Should omegamax.co/admin be a Vercel rewrite to a separate Vercel project, or should both apps deploy to the same Vercel project using Vercel Services? Note: cross-project path routing on the same domain may require specific Vercel plan features — verify before committing.
  5. Timeline: What's the target timeline for Phase 1-4? Phase 5-6 can wait for enterprise demand.
  6. BYOS security model: The "Bring Your Own Supabase" flow stores users' service role keys encrypted on the managed platform. A service role key bypasses RLS entirely. Need to define: encryption-at-rest mechanism, key management (who holds the master key?), and whether a less-privileged credential model (anon key + RLS) is sufficient for BYOS reads.
  7. Extension point design: Phase 1b (DIVERGED file resolution) requires designing extension points in bridge.py and mcp_server.py where none currently exist. This is the highest-risk technical design work and should be prototyped before committing to the full split.

12. References