Memory MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
Step 1: Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone <repo-url>
cd memory
npm install
npm run build
Step 2: Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add -e "TURSO_DATABASE_URL=${TURSO_DATABASE_URL}" -e "TURSO_AUTH_TOKEN=${TURSO_AUTH_TOKEN}" -e "OPENAI_API_KEY=${OPENAI_API_KEY}" memory-mcp -- node "<FULL_PATH_TO_MEMORY_MCP>/dist/index.js"

Replace <FULL_PATH_TO_MEMORY_MCP> with the absolute path to the folder you prepared in step 1, so the command points at its dist/index.js.

Required: TURSO_DATABASE_URL, TURSO_AUTH_TOKEN, OPENAI_API_KEY
README.md

Memory MCP

A Model Context Protocol (MCP) server that gives AI assistants persistent, semantic memory. Backed by Turso (libSQL) for storage with vector search, and OpenAI for embeddings and LLM-powered query generation.

All interactions are in plain English. The server uses GPT-5 with function calling to translate natural language into the right database operations automatically.

Features

  • Remember — Store new memories with automatic duplicate detection, field extraction, and quality validation
  • Forget — Remove or modify memories by describing what to change
  • Recall — Search memories semantically or with structured queries, without modifying data
  • Process — Review and refine stored memories: merge duplicates, fill gaps, ask clarifying questions
  • Rejection system — The LLM will reject nonsensical, duplicate, contradictory, or low-quality memories with a structured reason and category
  • Vector search — Semantic similarity search using OpenAI embeddings (text-embedding-3-small, 1536 dimensions) with libSQL DiskANN indexes
  • Table isolation — Each use case gets its own table with custom freeform columns, all in one database
  • Claude Code integration — Slash commands for table management (/setup-table, /list-tables, /drop-table)
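To make the vector-search feature concrete: libSQL ranks rows by cosine distance between the stored embedding and the query embedding. The helper below is a local re-implementation of that distance for illustration only; the name cosineDistance is not from the codebase, and in production the comparison happens inside the database.

```typescript
// Cosine distance between two embedding vectors, as a cosine-distance
// function in the database would compute it: 1 minus cosine similarity.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical directions give distance 0; orthogonal vectors give distance 1.
console.log(cosineDistance([1, 0], [1, 0]));
console.log(cosineDistance([1, 0], [0, 1]));
```

In the real server the vectors are 1536-dimensional text-embedding-3-small outputs, and the DiskANN index makes the nearest-neighbor lookup efficient instead of scanning every row.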

How It Works

┌─────────────┐     plain English      ┌─────────────┐     function calls     ┌───────────┐
│  MCP Client │ ──────────────────────► │   GPT-5     │ ──────────────────────► │  Turso DB │
│  (Claude)   │ ◄────────────────────── │  + prompts  │ ◄────────────────────── │  (libSQL) │
└─────────────┘     structured result   └─────────────┘     SQL + vectors      └───────────┘
  1. The MCP client sends a plain English request (e.g., "remember that user octocat prefers concise replies")
  2. The server loads the table schema and builds a system prompt with operation-specific instructions
  3. GPT-5 decides which internal tools to call (search, insert, update, delete, reject, or ask questions)
  4. An agentic loop executes tool calls against Turso, feeds results back to the LLM, and repeats for up to 5 rounds
  5. The final response is returned to the MCP client with success/rejection/questions status
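Steps 3–5 can be sketched as a generic loop. The signatures below are assumptions for illustration (the real implementation lives in src/memory-ops.ts): callLLM stands in for the OpenAI function-calling client and execTool for the Turso query layer.

```typescript
// Generic sketch of the agentic loop: ask the LLM, execute its tool calls,
// feed the results back, and stop on a final answer or after maxRounds.
type ToolCall = { name: string; args: Record<string, unknown> };
type LLMTurn = { toolCalls: ToolCall[]; final?: string };

async function agenticLoop(
  callLLM: (toolResults: string[]) => Promise<LLMTurn>,
  execTool: (call: ToolCall) => Promise<string>,
  maxRounds = 5, // the server repeats for up to 5 rounds
): Promise<string> {
  const toolResults: string[] = [];
  for (let round = 0; round < maxRounds; round++) {
    const turn = await callLLM(toolResults);
    if (turn.final !== undefined) return turn.final; // LLM produced the result
    for (const call of turn.toolCalls) {
      // e.g. run search/insert/update/delete SQL against Turso
      toolResults.push(await execTool(call));
    }
  }
  throw new Error("exceeded maximum tool rounds");
}
```

The round cap keeps a confused model from looping forever; each iteration gives the LLM the accumulated tool output so it can decide whether to query again or answer.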

Architecture

src/
├── index.ts           # MCP server entry point — tool definitions
├── llm.ts             # OpenAI wrapper — models, tool schemas, prompt loading
├── memory-ops.ts      # Agentic loop — tool execution, rejection, questions
├── db.ts              # Turso/libSQL client — queries, schema inspection
├── embeddings.ts      # OpenAI embeddings — text-embedding-3-small
├── table-setup.ts     # Table lifecycle — create, drop, list
└── prompts/
    ├── base.txt       # Shared context (table schema, column descriptions)
    ├── remember.txt   # Store operation instructions + rejection rules
    ├── forget.txt     # Delete/modify operation instructions
    ├── recall.txt     # Read-only search instructions
    └── process.txt    # Memory refinement and question-asking instructions

System prompts are stored as plain text files for easy editing and version control. They use {{TABLE_NAME}} and {{TABLE_SCHEMA}} placeholders that are replaced at runtime.

Requirements

  • Node.js 18+
  • A Turso database (or any libSQL-compatible endpoint)
  • An OpenAI API key

Installation

git clone <repo-url>
cd memory
npm install
npm run build

Environment Variables

Create a .env file (see .env.example):

TURSO_DATABASE_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your-turso-auth-token
OPENAI_API_KEY=sk-your-openai-api-key
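All three variables are required, so it can be useful to fail fast if any are absent. A small sketch of such a startup check (the missingEnv helper is hypothetical; the server may validate differently):

```typescript
// Report which of the three required environment variables are unset,
// so a misconfigured server fails loudly at startup instead of mid-request.
const REQUIRED = ["TURSO_DATABASE_URL", "TURSO_AUTH_TOKEN", "OPENAI_API_KEY"];

function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => !env[name]);
}

// Example: only the database URL is set, so two variables are reported.
console.log(missingEnv({ TURSO_DATABASE_URL: "libsql://your-db.turso.io" }));
```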

Creating Memory Tables

Each use case needs its own table. Use the Claude Code /setup-table command for an interactive setup, or create tables programmatically:

import { createMemoryTable } from "./src/table-setup.js";

await createMemoryTable("github_users", [
  { name: "username", type: "TEXT" },
  { name: "category", type: "TEXT" },
  { name: "importance", type: "TEXT" },
]);

Every table automatically gets these core columns:

Column       Type                  Description
id           INTEGER PRIMARY KEY   Auto-incrementing ID
memory       TEXT NOT NULL         The memory content
embedding    FLOAT32(1536)         Vector embedding for semantic search
created_at   TEXT NOT NULL         ISO 8601 timestamp

Plus whatever freeform columns you define (TEXT, INTEGER, or REAL).
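Putting the core and freeform columns together, the DDL for the github_users example plausibly looks like the output below. The column names and types come from the table above; the exact SQL createMemoryTable emits is an assumption, and memoryTableDDL is a hypothetical helper written only to illustrate it.

```typescript
// Sketch of the CREATE TABLE statement implied by the core-column table
// above plus the user-defined freeform columns.
type FreeformColumn = { name: string; type: "TEXT" | "INTEGER" | "REAL" };

function memoryTableDDL(table: string, freeform: FreeformColumn[]): string {
  const custom = freeform.map((c) => `  ${c.name} ${c.type},`).join("\n");
  return [
    `CREATE TABLE IF NOT EXISTS ${table} (`,
    "  id INTEGER PRIMARY KEY,",
    "  memory TEXT NOT NULL,",
    "  embedding FLOAT32(1536),",
    custom,
    "  created_at TEXT NOT NULL",
    ");",
  ].join("\n");
}

console.log(memoryTableDDL("github_users", [
  { name: "username", type: "TEXT" },
  { name: "category", type: "TEXT" },
  { name: "importance", type: "TEXT" },
]));
```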

MCP Server Configuration

Add to your Claude Code MCP config (.claude/mcp.json or similar):

{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/memory-mcp/build/index.js"],
      "env": {
        "TURSO_DATABASE_URL": "libsql://your-db.turso.io",
        "TURSO_AUTH_TOKEN": "your-token",
        "OPENAI_API_KEY": "sk-your-key"
      }
    }
  }
}

Tool Reference

The server exposes four MCP tools, each driven by a plain English request:

  • remember — Store a new memory with automatic duplicate detection and quality validation.
  • forget — Remove or modify existing memories by describing what to change.
  • recall — Search memories semantically or with structured queries, without modifying data.
  • process — Review and refine stored memories, merge duplicates, or fill gaps.

Environment Variables

  • TURSO_DATABASE_URL (required) — The connection string for your Turso or libSQL database.
  • TURSO_AUTH_TOKEN (required) — Authentication token for your Turso database.
  • OPENAI_API_KEY (required) — API key for OpenAI embeddings and LLM operations.

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/memory-mcp/build/index.js"],
      "env": {
        "TURSO_DATABASE_URL": "libsql://your-db.turso.io",
        "TURSO_AUTH_TOKEN": "your-token",
        "OPENAI_API_KEY": "sk-your-key"
      }
    }
  }
}

Try it

Remember that I prefer to use TypeScript for all new project prototypes.
Recall what I previously noted about the project requirements for the client meeting.
Forget the previous instruction about using Tailwind CSS for the landing page.
Process my stored memories to identify any duplicate entries regarding project deadlines.

Frequently Asked Questions

What are the key features of Memory MCP?

Persistent semantic memory using Turso and OpenAI embeddings. Automatic duplicate detection and quality validation for stored information. Natural language interface for storing, retrieving, and modifying data. Table isolation allowing multiple use cases in one database. Agentic loop for complex memory operations and refinement.

What can I use Memory MCP for?

Maintaining a persistent knowledge base of user preferences across coding sessions. Storing and retrieving project-specific requirements or documentation snippets. Managing a dynamic to-do list or task tracker that the AI can update and query. Refining and organizing unstructured notes into structured database entries.

How do I install Memory MCP?

Install Memory MCP by running: git clone <repo-url> && cd memory && npm install && npm run build

What MCP clients work with Memory MCP?

Memory MCP works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
