# Memory MCP
A Model Context Protocol (MCP) server that gives AI assistants persistent, semantic memory. Backed by Turso (libSQL) for storage with vector search, and OpenAI for embeddings and LLM-powered query generation.
All interactions are in plain English. The server uses GPT-5 with function calling to translate natural language into the right database operations automatically.
## Features
- Remember — Store new memories with automatic duplicate detection, field extraction, and quality validation
- Forget — Remove or modify memories by describing what to change
- Recall — Search memories semantically or with structured queries, without modifying data
- Process — Review and refine stored memories: merge duplicates, fill gaps, ask clarifying questions
- Rejection system — The LLM will reject nonsensical, duplicate, contradictory, or low-quality memories with a structured reason and category
- Vector search — Semantic similarity search using OpenAI embeddings (text-embedding-3-small, 1536 dimensions) with libSQL DiskANN indexes
- Table isolation — Each use case gets its own table with custom freeform columns, all in one database
- Claude Code integration — Slash commands for table management (`/setup-table`, `/list-tables`, `/drop-table`)
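The vector search feature above delegates similarity ranking to libSQL's vector functions; conceptually, it ranks memories by cosine similarity between 1536-dimension embeddings. A minimal pure-TypeScript sketch of that ranking metric (illustrative only, not the server's code):

```typescript
// Cosine similarity between two embedding vectors: dot product
// divided by the product of the vector norms. Returns a value in
// [-1, 1], where 1 means the embeddings point the same way.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In production the server relies on libSQL's DiskANN indexes rather than a linear scan like this, which would be too slow over large tables.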
## How It Works

```
┌─────────────┐    plain English     ┌─────────────┐    function calls    ┌───────────┐
│ MCP Client  │ ───────────────────► │   GPT-5     │ ───────────────────► │ Turso DB  │
│  (Claude)   │ ◄─────────────────── │  + prompts  │ ◄─────────────────── │ (libSQL)  │
└─────────────┘   structured result  └─────────────┘    SQL + vectors     └───────────┘
```
1. The MCP client sends a plain-English request (e.g., "remember that user octocat prefers concise replies")
2. The server loads the table schema and builds a system prompt with operation-specific instructions
3. GPT-5 decides which internal tools to call (search, insert, update, delete, reject, or ask questions)
4. An agentic loop executes tool calls against Turso, feeds results back to the LLM, and repeats for up to 5 rounds
5. The final response is returned to the MCP client with a success, rejection, or questions status
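The agentic loop described above can be sketched as follows. This is a simplified illustration with hypothetical names, not the server's actual API; the real implementation lives in `src/memory-ops.ts`:

```typescript
// One LLM turn either requests tool calls or produces a final answer.
type ToolCall = { name: string; args: Record<string, unknown> };
type LlmTurn = { toolCalls: ToolCall[]; finalText?: string };

// Ask the LLM what to do, execute its tool calls, feed the results
// back, and repeat until it answers or the round limit is reached.
async function runAgentLoop(
  callLlm: (history: string[]) => Promise<LlmTurn>,
  execTool: (call: ToolCall) => Promise<string>,
  maxRounds = 5,
): Promise<string> {
  const history: string[] = [];
  for (let round = 0; round < maxRounds; round++) {
    const turn = await callLlm(history);
    if (turn.finalText !== undefined) return turn.finalText; // done
    for (const call of turn.toolCalls) {
      history.push(await execTool(call)); // feed results back
    }
  }
  return "max rounds reached";
}
```

The round cap keeps a confused model from looping forever against the database.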
## Architecture

```
src/
├── index.ts         # MCP server entry point — tool definitions
├── llm.ts           # OpenAI wrapper — models, tool schemas, prompt loading
├── memory-ops.ts    # Agentic loop — tool execution, rejection, questions
├── db.ts            # Turso/libSQL client — queries, schema inspection
├── embeddings.ts    # OpenAI embeddings — text-embedding-3-small
├── table-setup.ts   # Table lifecycle — create, drop, list
└── prompts/
    ├── base.txt     # Shared context (table schema, column descriptions)
    ├── remember.txt # Store operation instructions + rejection rules
    ├── forget.txt   # Delete/modify operation instructions
    ├── recall.txt   # Read-only search instructions
    └── process.txt  # Memory refinement and question-asking instructions
```
System prompts are stored as plain text files for easy editing and version control. They use `{{TABLE_NAME}}` and `{{TABLE_SCHEMA}}` placeholders that are replaced at runtime.
## Requirements

- Node.js
- A Turso (libSQL) database
- An OpenAI API key

## Installation

```bash
git clone <repo-url>
cd memory
npm install
npm run build
```
## Environment Variables

Create a `.env` file (see `.env.example`):

```
TURSO_DATABASE_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your-turso-auth-token
OPENAI_API_KEY=sk-your-openai-api-key
```
## Creating Memory Tables

Each use case needs its own table. Use the Claude Code `/setup-table` command for an interactive setup, or create tables programmatically:

```typescript
import { createMemoryTable } from "./src/table-setup.js";

await createMemoryTable("github_users", [
  { name: "username", type: "TEXT" },
  { name: "category", type: "TEXT" },
  { name: "importance", type: "TEXT" },
]);
```
Every table automatically gets these core columns:

| Column | Type | Description |
|---|---|---|
| `id` | `INTEGER PRIMARY KEY` | Auto-incrementing ID |
| `memory` | `TEXT NOT NULL` | The memory content |
| `embedding` | `FLOAT32(1536)` | Vector embedding for semantic search |
| `created_at` | `TEXT NOT NULL` | ISO 8601 timestamp |

Plus whatever freeform columns you define (`TEXT`, `INTEGER`, or `REAL`).
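As a sketch, assuming the statement is assembled as a plain string, the `CREATE TABLE` for the core plus freeform columns might be built like this (the actual implementation is in `src/table-setup.ts` and may differ):

```typescript
// A freeform column is restricted to the three supported types.
type ColumnDef = { name: string; type: "TEXT" | "INTEGER" | "REAL" };

// Combine the four core columns with the caller's freeform columns
// into a single CREATE TABLE statement.
function buildCreateTableSql(table: string, columns: ColumnDef[]): string {
  const core = [
    "id INTEGER PRIMARY KEY",
    "memory TEXT NOT NULL",
    "embedding FLOAT32(1536)",
    "created_at TEXT NOT NULL",
  ];
  const custom = columns.map((c) => `${c.name} ${c.type}`);
  return `CREATE TABLE IF NOT EXISTS ${table} (${[...core, ...custom].join(", ")})`;
}
```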
## MCP Server Configuration

Add to your Claude Code MCP config (`.claude/mcp.json` or similar):

```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/memory-mcp/build/index.js"],
      "env": {
        "TURSO_DATABASE_URL": "libsql://your-db.turso.io",
        "TURSO_AUTH_TOKEN": "your-token",
        "OPENAI_API_KEY": "sk-your-key"
      }
    }
  }
}
```
## Tool Reference

### `remember`

Store a new memory. The LLM searches for similar existing memories first, then inserts the memory, or rejects it with a structured reason if it is a duplicate, contradictory, or low quality.

### `forget`

Remove or modify existing memories by describing what to change.

### `recall`

Search memories semantically or with structured queries; read-only, it never modifies data.

### `process`

Review and refine stored memories: merge duplicates, fill gaps, or ask clarifying questions.
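Internally, each tool plausibly maps to one of the prompt files under `src/prompts/` plus a write-permission flag; the table below is an illustrative guess at that mapping, not the server's actual data structure:

```typescript
// One entry per MCP tool: which operation prompt to load and
// whether the agentic loop may issue writes against the database.
type Operation = { prompt: string; readOnly: boolean };

const OPERATIONS: Record<string, Operation> = {
  remember: { prompt: "remember.txt", readOnly: false },
  forget:   { prompt: "forget.txt",   readOnly: false },
  recall:   { prompt: "recall.txt",   readOnly: true  }, // never modifies data
  process:  { prompt: "process.txt",  readOnly: false },
};
```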