ACT 01 · THE SHAPE OF THE PROBLEM

Memory for the AI
you already have.

Belay is a local-first memory layer for any MCP-capable AI client. Your preferences, history, and the assistant you've built over months — all on your hardware. Switch models, switch clients, switch devices. Your context comes with you.

one memory · any ai · your hardware
◆ FROM THE MANIFESTO
"You already pay for an AI you like. You hate that every new chat starts from scratch. You hate that when you switch to a different model for a different task, the relationship resets. Belay fixes this — not by selling you another AI, but by giving the one you already use a memory that is yours, lives on your hardware, and works across every tool you plug it into."
◆ THREE PRINCIPLES
01
Portable by design
Your memory lives in one place. Every MCP-capable AI client reads from it. Switch models; keep the relationship. No starting over when the next platform ships.
02
Local by default
Your memory lives on your hardware, not on our server. Exportable, deletable, auditable — genuinely yours. No vendor to lock you in.
03
Zero inference, at every layer
We never call a model. Not in the library. Not in the app. Not in the sync relay. Your LLM client does the thinking; Belay holds what it learns about you.
◆ PRIVATE ALPHA · SHIPPING SUMMER 2026

Start the relationship now.
You'll still have it when the model is yours.

We'll email you when the Belay desktop app is ready — Mac, Windows, and Linux, all on day one.

◆ we send one email when it's ready · nothing else
◆ FOR DEVELOPERS

The protocol is open.

The deduplication engine, the canonicalization rules, the event schema, and the VectorStore interface are MIT-licensed and specified in our public PROTOCOL.md.

Build on it. Fork it. Reimplement it in another language. Our cross-language fixture pack will tell you if you got it right.

# Python — available on PyPI at launch
pip install belay-context-engine

# Protocol specification
curl https://getbelay.ai/protocol.md

# MIT-licensed core · zero LLM dependencies
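To make the "reimplement it in another language" claim concrete, here is a hypothetical sketch of what a backend conforming to a VectorStore-style interface might look like. Every name here — `VectorStore`, `upsert`, `query`, `InMemoryStore` — is an illustrative assumption, not the published interface; PROTOCOL.md is the source of truth.

```python
# Hypothetical sketch — names and signatures are illustrative,
# not the interface specified in PROTOCOL.md.
from abc import ABC, abstractmethod
import math


class VectorStore(ABC):
    """Pluggable storage backend: the engine reads and writes
    embedding vectors only through this interface."""

    @abstractmethod
    def upsert(self, key: str, vector: list[float]) -> None:
        """Insert or overwrite the vector stored under `key`."""

    @abstractmethod
    def query(self, vector: list[float], top_k: int = 5) -> list[tuple[str, float]]:
        """Return up to `top_k` (key, similarity) pairs, best match first."""


class InMemoryStore(VectorStore):
    """Minimal reference backend: brute-force cosine similarity."""

    def __init__(self) -> None:
        self._rows: dict[str, list[float]] = {}

    def upsert(self, key: str, vector: list[float]) -> None:
        self._rows[key] = vector

    def query(self, vector: list[float], top_k: int = 5) -> list[tuple[str, float]]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        scored = [(k, cosine(vector, v)) for k, v in self._rows.items()]
        scored.sort(key=lambda kv: kv[1], reverse=True)
        return scored[:top_k]


store = InMemoryStore()
store.upsert("pref:editor", [1.0, 0.0])
store.upsert("pref:theme", [0.0, 1.0])
print(store.query([1.0, 0.1], top_k=1))
```

A brute-force in-memory backend like this is the natural baseline to run a cross-language fixture pack against; a production backend would keep the same interface and swap in a real index underneath.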