Goose Grant Application Draft

Review this, then I'll fill and submit the form. Edit anything you want changed.

Page 1: Basics

Name: Jason Sosa

Email: jason@element1.ai

Country: Singapore

Note: The form was submitted with "United States" and needs correction. Email the Goose grants team to update the country to Singapore, for consistency with other applications (NLnet, STF, etc.). The stichting is registered in the Netherlands; the applicant is based in Singapore.

Page 2: Open Source

Have you contributed to Goose yet? No

Links to Goose PRs: N/A

Links to other open source contributions:

https://github.com/omega-memory/omega-memory - OMEGA Memory: open-source persistent memory system for AI agents (Apache 2.0). 2,150+ tests, 29K lines of source.
https://pypi.org/project/omega-memory/ - Published on PyPI (v0.9.3), pip-installable MCP server.
https://github.com/omega-memory/memorystress - MemoryStress: open-source longitudinal memory benchmark (583 facts, 1000 sessions, 300 questions). Published on HuggingFace.

Page 3: Project Details

Project Title: OMEGA: Persistent Memory and Coordination for Goose Agents

Project Description:

OMEGA is an open-source, local-first persistent memory system for AI agents, built as an MCP server. It scores 95.4% on LongMemEval (the ICLR 2025 standard benchmark for agent memory), ranking #1 on the public leaderboard. It provides semantic search, temporal reasoning, cross-session learning, and multi-agent coordination, all running locally with zero cloud dependencies.

Goose is the ideal host for OMEGA. As an open-source agent framework with native MCP support, Goose can immediately benefit from persistent memory: agents that remember user preferences across sessions, learn from past mistakes, coordinate on shared tasks, and resume interrupted work without context loss.

This grant would fund three workstreams:

1. First-class Goose integration: a Goose toolkit/extension that makes OMEGA a one-command install for any Goose user. Automatic session hooks, welcome briefings, and context-aware memory retrieval tuned for Goose's agent patterns.

2. Memory accuracy improvements: close the gap on multi-session reasoning (currently 65.4%) and preference extraction (50%) through enhanced retrieval strategies and cross-session linking. Target: 90%+ overall on LongMemEval.

3. Multi-agent coordination for Goose: OMEGA already has 34 coordination tools (file claims, branch guards, task management, peer messaging, deadlock detection). This work would integrate them natively with Goose's multi-agent patterns so teams of Goose agents can collaborate without stepping on each other.
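To make the file-claim idea concrete, here is a minimal sketch of how exclusive claims can prevent two agents from editing the same file: an agent records a claim before touching a path, and peers detect the conflict instead of clobbering each other. The table, schema, and function names below are invented for illustration; they are not OMEGA's actual API.

```python
import sqlite3

def open_claims_db(path=":memory:"):
    # One shared claims database; a PRIMARY KEY on path enforces
    # at most one claim per file at a time.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS file_claims ("
        " path TEXT PRIMARY KEY,"
        " agent TEXT NOT NULL)"
    )
    return db

def claim_file(db, path, agent):
    """Try to claim `path` for `agent`; False if already claimed."""
    try:
        db.execute("INSERT INTO file_claims (path, agent) VALUES (?, ?)",
                   (path, agent))
        db.commit()
        return True
    except sqlite3.IntegrityError:  # PRIMARY KEY conflict: held by a peer
        return False

def release_file(db, path, agent):
    """Release a claim, but only if `agent` actually holds it."""
    cur = db.execute("DELETE FROM file_claims WHERE path = ? AND agent = ?",
                     (path, agent))
    db.commit()
    return cur.rowcount == 1
```

The same claim/release pattern extends naturally to branch guards and task handoff: a guard is just a claim on a branch name rather than a file path.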

OMEGA is fully built and production-tested: 2,150+ passing tests, 29,000 lines of source code, published on PyPI. This grant accelerates ecosystem integration and accuracy, not initial development.

Page 4: Experience

Prior open source experience? Yes

GitHub or portfolio link: https://github.com/omega-memory

Expected deliverables and milestones:

Q1 (Months 1-3): Goose Integration
- Build and publish an OMEGA Goose toolkit/extension with one-command setup
- Automatic session lifecycle hooks (memory capture on task completion, context loading on session start)
- Documentation and quickstart guide for Goose users
- Deliverable: Published Goose extension, installable via goose toolkit install
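The session lifecycle hooks above can be sketched as two callbacks: load a short briefing when a session starts, and capture a summary when a task completes. The hook names and the Memory class are invented for this sketch; Goose's actual extension API may differ.

```python
class Memory:
    """Toy in-process stand-in for the OMEGA memory store."""
    def __init__(self):
        self.notes = []

    def save(self, note):
        self.notes.append(note)

    def briefing(self, limit=3):
        # "Welcome briefing": surface the most recent notes first.
        return list(reversed(self.notes))[:limit]

def on_session_start(memory):
    # Context loading on session start: summarize prior sessions.
    lines = memory.briefing()
    return "Previously: " + "; ".join(lines) if lines else "No prior context."

def on_task_complete(memory, summary):
    # Memory capture on task completion.
    memory.save(summary)
```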

Q2 (Months 4-6): Memory Accuracy Sprint
- Improve multi-session reasoning from 65.4% to 80%+ (enhanced graph traversal, cross-session linking)
- Improve preference extraction from 50% to 75%+ (dedicated preference memory pipeline)
- Improve temporal reasoning from 70.7% to 85%+ (temporal indexing improvements)
- Deliverable: LongMemEval score improvement to 90%+, published benchmark results

Q3 (Months 7-9): Multi-Agent Coordination
- Integrate OMEGA's 34 coordination tools with Goose multi-agent patterns
- File claim system, branch guards, and task handoff optimized for Goose agent teams
- Real-world testing with multi-Goose-agent development workflows
- Deliverable: Coordination toolkit for Goose, demo repository with multi-agent examples

Q4 (Months 10-12): Ecosystem and Community
- IDE plugins for Goose+OMEGA (VS Code, JetBrains, Cursor)
- Interactive onboarding tutorials and video walkthroughs
- Community contributor program and integration bounties
- Deliverable: Published plugins, documentation site, contributor guide

Page 5: Commitment

How does your project align with Goose's values of openness, modularity, and user empowerment?

Openness: OMEGA is Apache 2.0 licensed with foundation governance through Kokyo Keisho Zaidan Stichting. All core memory, retrieval, and coordination code is open source. The LongMemEval benchmark results are publicly reproducible, and I published my own benchmark (MemoryStress) as open data on HuggingFace.

Modularity: OMEGA is built as a standard MCP server, not a monolithic framework. It integrates with any MCP-compatible client, including Goose, through the protocol layer. Each capability (memory, coordination, routing) is a separate module. Users can adopt just the memory tools, or the full suite. This is infrastructure, not lock-in.

User empowerment: OMEGA is local-first by design. All data stays on the user's machine in a SQLite database. Embeddings run on CPU via ONNX. Profiles are encrypted via the system keyring. There are zero cloud dependencies for core operation. Users own their memory, their data never leaves their device, and they can export everything at any time. This directly addresses the #1 enterprise barrier to AI adoption: data privacy.
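The local-first storage model can be illustrated in a few lines: memories live in a single SQLite file on the user's machine, and everything is exportable at any time with no cloud involved. The schema and function names are illustrative, not OMEGA's real ones.

```python
import json
import sqlite3

def open_memory_db(path=":memory:"):
    # A single local SQLite file holds all memories.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        " id INTEGER PRIMARY KEY,"
        " created_at TEXT DEFAULT (datetime('now')),"
        " kind TEXT NOT NULL,"      # e.g. 'preference', 'fact'
        " content TEXT NOT NULL)"
    )
    return db

def remember(db, kind, content):
    db.execute("INSERT INTO memories (kind, content) VALUES (?, ?)",
               (kind, content))
    db.commit()

def export_all(db):
    """User-owned data: dump every memory as JSON."""
    rows = db.execute(
        "SELECT id, created_at, kind, content FROM memories ORDER BY id"
    ).fetchall()
    return json.dumps(
        [{"id": r[0], "created_at": r[1], "kind": r[2], "content": r[3]}
         for r in rows]
    )
```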

What impact do you expect this project to have on the open source AI community?

Persistent memory is the missing infrastructure layer for AI agents. Every agent framework, including Goose, currently treats each session as a blank slate. OMEGA changes this by providing a standard, open-source memory layer that any agent can use.

Concrete expected impact:

1. Goose agents that learn and improve over time, rather than resetting every session. Developers using Goose+OMEGA will stop losing context, preferences, and debugging insights between sessions.

2. A reference implementation for agent memory that the broader MCP ecosystem can build on. With 10,000+ MCP servers in the ecosystem, there is no standard approach to persistence. OMEGA can establish the pattern.

3. Multi-agent coordination becomes accessible. 80% of enterprises plan multi-agent deployments but less than 10% succeed, largely because coordination infrastructure doesn't exist. OMEGA's coordination tools, integrated with Goose, lower that barrier.

4. An open benchmark culture. I published MemoryStress (the first longitudinal memory benchmark) as open data. All LongMemEval results are reproducible. This pushes the ecosystem toward measurable, comparable memory quality rather than marketing claims.

How will you ensure your work remains open source?

Three structural guarantees:

1. License: OMEGA core is Apache 2.0 and will remain so. The license is already published and cannot be revoked for existing versions.

2. Foundation governance: OMEGA is governed by Kokyo Keisho Zaidan Stichting, a Dutch foundation (stichting). The foundation's charter requires that core infrastructure remains open source. This is a legal commitment, not just a promise.

3. Public development: All development happens in the open on GitHub (github.com/omega-memory/omega-memory). Issues, PRs, roadmap, and benchmarks are public. Grant deliverables will be developed in public repositories with open review processes.

Is this work already funded? No

Can you commit to 12 months? Yes

Additional information:

OMEGA has been built entirely without external funding. The 95.4% LongMemEval score, 2,150+ test suite, and 29,000 lines of production code represent approximately 6 months of focused development.

What makes this grant application unique: OMEGA is not a proposal for something to be built. It is a working system seeking funding to deepen its integration with the Goose ecosystem and push accuracy to state-of-the-art levels. The core technology risk is zero; this is execution and ecosystem work.

The Goose+OMEGA combination would be the first open-source AI agent with production-grade persistent memory, setting a new standard for what developer-facing AI agents can do.

Page 6: Agreement

  • I understand this is not employment and does not imply future hiring.
  • I agree to quarterly progress reviews and milestone-based payouts.
  • I commit to keeping this work open source.