Open Brain MCP Server

A personal semantic knowledge base exposed as MCP tools. Store, search, and retrieve memories using natural language across Cursor, Claude Desktop, or any MCP-compatible client.

Quick add for Claude Code (requires OPENROUTER_API_KEY in your environment; the remaining variables are optional, see Configuration):

claude mcp add -e "OPENROUTER_API_KEY=${OPENROUTER_API_KEY}" open-brain -- npx tsx /path/to/mcp-server/src/stdio.ts

Tools

Tool Description
search_brain Semantic similarity search across all memories
add_memory Embed and store a new piece of knowledge
recall Filtered list retrieval by source, tags, or date — no embedding needed
forget Delete a memory by UUID
brain_stats Counts and breakdown by source
discover_tools Semantic search across the tool registry (Toolshed)
index_cursor_chats Index Cursor agent transcripts as searchable work history
search_work_history Keyword search across raw Cursor transcript files

Setup

cd mcp-server
npm install
cp .env.example .env
# edit .env with your credentials

Configuration

All configuration is via environment variables in .env.

Required (always)

Variable Description
OPENROUTER_API_KEY Used to generate embeddings via OpenRouter

Database backend

The server supports two database backends. Set DB_BACKEND to choose (default: supabase).

Supabase (default)
DB_BACKEND=supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
Raw Postgres

Point the server at any Postgres instance with the pgvector extension and the brain_memories schema applied.

DB_BACKEND=postgres
DATABASE_URL=postgresql://user:password@host:5432/dbname

Both backends use the same schema and the same match_memories SQL function. See Database Schema below.
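Conceptually, match_memories ranks stored rows by cosine similarity between a query embedding and each row's embedding, filters by a threshold, and caps the result count. A runnable TypeScript sketch of that ranking (the in-memory rows and parameter names here are illustrative, not the actual SQL function):

```typescript
// Illustrative in-memory model of what match_memories does:
// rank rows by cosine similarity to a query embedding.
interface MemoryRow {
  id: string;
  content: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function matchMemories(
  rows: MemoryRow[],
  query: number[],
  threshold = 0.5,
  limit = 5,
): Array<MemoryRow & { similarity: number }> {
  return rows
    .map((r) => ({ ...r, similarity: cosineSimilarity(r.embedding, query) }))
    .filter((r) => r.similarity >= threshold)
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, limit);
}
```

The real function runs inside Postgres against the halfvec column, which is what makes search fast at scale; this sketch only shows the ranking semantics.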

Optional

Variable Default Description
EMBEDDING_MODEL openai/text-embedding-3-small OpenRouter embedding model
EMBEDDING_DIMENSIONS 1536 Must match the model output and schema
MCP_HTTP_PORT 3100 Port for the HTTP/SSE transport
CURSOR_TRANSCRIPTS_DIR (unset) Path to Cursor agent-transcripts directory; enables index_cursor_chats and search_work_history
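The defaults above can be resolved in one place at startup. A minimal sketch (the loadConfig helper and the config shape are illustrative, not the server's actual code; only the variable names and defaults come from the table):

```typescript
// Illustrative resolver for the variables documented above,
// applying the listed defaults. Not the server's actual code.
type Env = Record<string, string | undefined>;

interface BrainConfig {
  openrouterApiKey: string;
  embeddingModel: string;
  embeddingDimensions: number;
  httpPort: number;
  cursorTranscriptsDir?: string;
}

function loadConfig(env: Env): BrainConfig {
  const key = env.OPENROUTER_API_KEY;
  if (!key) throw new Error("OPENROUTER_API_KEY is required");
  return {
    openrouterApiKey: key,
    embeddingModel: env.EMBEDDING_MODEL ?? "openai/text-embedding-3-small",
    embeddingDimensions: Number(env.EMBEDDING_DIMENSIONS ?? "1536"),
    httpPort: Number(env.MCP_HTTP_PORT ?? "3100"),
    cursorTranscriptsDir: env.CURSOR_TRANSCRIPTS_DIR,
  };
}
```

Note that EMBEDDING_DIMENSIONS must agree with both the chosen model's output and the halfvec(1536) column in the schema.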

Running

stdio transport (Cursor / Claude Desktop)

npm run dev:stdio       # development (tsx)
npm run start:stdio     # production (compiled JS)

Add to .cursor/mcp.json (Cursor) or claude_desktop_config.json (Claude Desktop):

{
  "mcpServers": {
    "open-brain": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-server/src/stdio.ts"],
      "env": {
        "DB_BACKEND": "supabase",
        "SUPABASE_URL": "...",
        "SUPABASE_SERVICE_ROLE_KEY": "...",
        "OPENROUTER_API_KEY": "..."
      }
    }
  }
}

To use raw Postgres instead, swap the env block:

{
  "env": {
    "DB_BACKEND": "postgres",
    "DATABASE_URL": "postgresql://user:pass@host:5432/dbname",
    "OPENROUTER_API_KEY": "..."
  }
}

HTTP / SSE transport (network-accessible)

npm run dev:http        # development
npm run start:http      # production

Endpoints:

Endpoint Description
GET /sse SSE stream (MCP SSE transport)
POST /messages MCP message handling
GET /health Health check

Database Schema

Both backends require the following on the Postgres instance:

  • pgvector extension (for halfvec type)
  • brain_memories table
  • match_memories SQL function
  • brain_stats view

Schema is managed via the migrations in supabase/migrations/. For a raw Postgres instance, run the migration files in order against your database:

001_initial_schema.sql
002_open_brain.sql
003_brain_rls.sql
004_vector_halfvec.sql
005_uuid_default.sql
006_storage_fillfactor.sql
007_column_reorder.sql

brain_memories table

CREATE TABLE brain_memories (
  id              uuid          NOT NULL DEFAULT gen_random_uuid(),
  created_at      timestamptz            DEFAULT NOW(),
  updated_at      timestamptz            DEFAULT NOW(),
  source          text          NOT NULL DEFAULT 'manual',
  content         text          NOT NULL,
  tags            text[]                 DEFAULT '{}',
  source_metadata jsonb                  DEFAULT '{}',
  embedding       halfvec(1536)
);

Valid source values: manual, telegram, cursor, api, conversations, knowledge, work_history, toolshed.
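For client-side validation, the valid source values can be expressed as a TypeScript union (illustrative helper; the server enforces its own list):

```typescript
// The source values accepted by brain_memories, as a const tuple
// so the union type and the runtime check share one definition.
const MEMORY_SOURCES = [
  "manual", "telegram", "cursor", "api",
  "conversations", "knowledge", "work_history", "toolshed",
] as const;

type MemorySource = (typeof MEMORY_SOURCES)[number];

function isMemorySource(value: string): value is MemorySource {
  return (MEMORY_SOURCES as readonly string[]).includes(value);
}
```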


Toolshed

The Toolshed (discover_tools) solves the "tool explosion" problem. Instead of injecting hundreds of MCP tool schemas into the agent context, the agent calls discover_tools with a natural language query and gets back only the tools relevant to the current task.

Tool descriptions are loaded from tool-registry.json and embedded into brain_memories (source toolshed) at startup. Indexing is idempotent.
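Idempotent here means re-running startup indexing does not duplicate entries. One way to get that property, sketched with an in-memory map standing in for brain_memories (the shapes and names are illustrative, not the server's implementation):

```typescript
// Idempotent tool indexing: re-runs update existing entries
// keyed by tool name instead of inserting duplicates.
interface ToolEntry { name: string; description: string; }

function indexTools(
  registry: ToolEntry[],
  store: Map<string, ToolEntry>, // stand-in for rows with source "toolshed"
): number {
  let written = 0;
  for (const tool of registry) {
    const existing = store.get(tool.name);
    if (!existing || existing.description !== tool.description) {
      store.set(tool.name, tool); // the real server would embed + upsert here
      written++;
    }
  }
  return written; // number of entries actually (re)embedded
}
```

Keying on a stable identifier also means only changed descriptions pay the cost of re-embedding.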


Work History Indexing

When CURSOR_TRANSCRIPTS_DIR is set, two additional tools are enabled:

  • index_cursor_chats — reads JSONL transcript files from the directory, embeds each session summary, and stores it as a work_history memory
  • search_work_history — keyword search across the raw Cursor transcript files, no embeddings required
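Transcript files are JSONL: one JSON object per line. A minimal parsing sketch (the record shape, including a summary field, is an assumption for illustration, not Cursor's documented format):

```typescript
// Parse a JSONL file body: one JSON object per non-empty line.
// Blank lines are skipped; a malformed line will throw, surfacing bad input.
function parseJsonl(text: string): Array<Record<string, unknown>> {
  return text
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>);
}
```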


Try it

Search my brain for the notes I saved about the project architecture.
Add a new memory: The API endpoint for the production server is https://api.example.com.
What are the statistics of my stored memories by source?
Find all memories tagged with 'project-alpha' from last week.
Search my work history for the conversation where we discussed the database migration.

Frequently Asked Questions

What are the key features of Open Brain?

  • Semantic similarity search for personal knowledge retrieval
  • Support for Supabase or raw Postgres with pgvector
  • Automatic indexing of Cursor agent transcripts
  • Tool discovery registry to manage large tool sets
  • Memory management: adding, recalling, and deleting

What can I use Open Brain for?

  • Maintaining a searchable personal knowledge base of project notes and snippets
  • Indexing past Cursor AI agent conversations to recall previous coding decisions
  • Managing a large registry of MCP tools by searching for relevant ones on demand
  • Tracking work history and project progress through automated transcript indexing

How do I install Open Brain?

Clone the repository, run npm install inside mcp-server, then copy .env.example to .env and fill in your credentials (see Setup and Configuration).

What MCP clients work with Open Brain?

Open Brain works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
