SuperLocalMemory MCP Server

Local setup required. This server must be installed and prepared on your machine before you register it in Claude Code.

Step 1: Set the server up locally

Run this once to install the server before adding it to Claude Code.

Run in terminal
npm install -g superlocalmemory
Step 2: Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add superlocalmemory -- node "<FULL_PATH_TO_SUPERLOCALMEMORY>/dist/index.js"

Replace <FULL_PATH_TO_SUPERLOCALMEMORY> with the actual install location from step 1.

README.md


SuperLocalMemory V3

The first local-only AI memory to break 74% retrieval on LoCoMo. No cloud. No APIs. No data leaves your machine.

+16pp vs Mem0 (zero cloud)  ·  85% Open-Domain (best of any system)  ·  EU AI Act Ready


Why SuperLocalMemory?

Every major AI memory system — Mem0, Zep, Letta, EverMemOS — sends your data to cloud LLMs for core operations. That means latency on every query, cost on every interaction, and after August 2, 2026, a compliance problem under the EU AI Act.

SuperLocalMemory V3 takes a different approach: mathematics instead of cloud compute. Three techniques from differential geometry, algebraic topology, and stochastic analysis replace the work that other systems need LLMs to do — similarity scoring, contradiction detection, and lifecycle management. The result is an agent memory that runs entirely on your machine, on CPU, with no API keys, and still outperforms funded alternatives.
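To make the "math instead of cloud compute" idea concrete, here is a minimal sketch of purely local similarity scoring. This is not SuperLocalMemory's actual engine (the project describes techniques from differential geometry, topology, and stochastic analysis); it only shows the baseline such systems build on, cosine similarity over embedding vectors, computed on CPU with no API calls. The vectors are hypothetical 4-dimensional examples; real embedding models produce hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two embedding vectors are, entirely on CPU."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings for a stored memory and an incoming query.
memory_vec = [0.2, 0.7, 0.1, 0.0]
query_vec = [0.25, 0.65, 0.05, 0.1]
score = cosine_similarity(memory_vec, query_vec)  # close to 1.0 when similar
```

Because the scoring is plain arithmetic, each query costs microseconds of CPU time rather than a network round trip to an LLM.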

The numbers (evaluated on LoCoMo, the standard long-conversation memory benchmark):

System           Score   Cloud Required   Open Source   Funding
EverMemOS        92.3%   Yes              No
Hindsight        89.6%   Yes              No
SLM V3 Mode C    87.7%   Optional         Yes (MIT)     $0
Zep v3           85.2%   Yes              Deprecated    $35M
SLM V3 Mode A    74.8%   No               Yes (MIT)     $0
Mem0             64.2%   Yes              Partial       $24M

Mode A scores 74.8% with zero cloud dependency — outperforming Mem0 by 16 percentage points without a single API call. On open-domain questions, Mode A scores 85.0% — the highest of any system in the evaluation, including cloud-powered ones. Mode C reaches 87.7%, matching enterprise cloud systems.

Mathematical layers contribute +12.7 percentage points on average across 6 conversations (n=832 questions), with up to +19.9pp on the most challenging dialogues. This isn't more compute — it's better math.

Upgrading from V2 (2.8.6)? V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run slm migrate to upgrade your data; a backup is created automatically. Read the Migration Guide before upgrading.


Quick Start

Install via npm (recommended)

npm install -g superlocalmemory
slm setup     # Choose mode (A/B/C)
slm warmup    # Pre-download embedding model (~500MB, optional)

Install via pip

pip install superlocalmemory

First Use

slm remember "Alice works at Google as a Staff Engineer"
slm recall "What does Alice do?"
slm status

MCP Integration (Claude, Cursor, Windsurf, VS Code, etc.)

{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}

24 MCP tools available. Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools.

Dual Interface: MCP + CLI

SLM works everywhere -- from IDEs to CI pipelines to Docker containers. The only AI memory system with both MCP and agent-native CLI.

Need              Use   Example
IDE integration   MCP   Auto-configured for 17+ IDEs via slm connect
Shell script      CLI

Tools (3)

remember: Store information in the local memory system.
recall: Retrieve information from the local memory system.
status: Check the status of the memory system.
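Over MCP, a client invokes these tools with standard JSON-RPC 2.0 tools/call messages sent to the server's stdio. A minimal sketch of what a recall call might look like on the wire; note the argument name "query" is an assumption for illustration, not taken from the server's published tool schema:

```python
import json

# Hypothetical JSON-RPC 2.0 request an MCP client would send to the
# server's stdin to invoke the `recall` tool. The argument key "query"
# is assumed; consult the server's tool schema for the real parameter.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recall",
        "arguments": {"query": "What does Alice do?"},
    },
}
wire = json.dumps(request)  # one line of JSON per message over stdio
```

MCP clients such as Claude Code construct these messages for you; the sketch only shows what the protocol exchange looks like underneath.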

Configuration

claude_desktop_config.json
{"mcpServers": {"superlocalmemory": {"command": "slm", "args": ["mcp"]}}}

Try it

Remember that I prefer to use Python for data analysis tasks.
Recall what I previously mentioned about the project requirements for the Q3 update.
What do you know about the client's feedback from our last meeting?
Check the status of my local memory system to ensure it is running correctly.

Frequently Asked Questions

What are the key features of SuperLocalMemory?

4-channel retrieval using semantic, BM25, entity graph, and temporal analysis. Zero cloud dependency with no data leaving the local machine. EU AI Act compliant architecture. Dual interface support via MCP and agent-native CLI. High retrieval performance on LoCoMo benchmark.
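The 4-channel design implies each stored memory receives several independent relevance scores that are then fused into one ranking. Here is a toy sketch of such fusion; the channel weights are illustrative assumptions, not SuperLocalMemory's actual values, and the real system may fuse scores very differently:

```python
# Toy fusion of per-channel relevance scores into a single ranking.
# Weights below are assumptions for illustration only.
WEIGHTS = {"semantic": 0.4, "bm25": 0.3, "entity": 0.2, "temporal": 0.1}

def fuse(channel_scores):
    """Weighted sum of per-channel scores, each normalized to [0, 1]."""
    return sum(WEIGHTS[ch] * s for ch, s in channel_scores.items())

# Hypothetical candidate memories with made-up per-channel scores
# for the query "What does Alice do?".
candidates = {
    "Alice works at Google as a Staff Engineer":
        {"semantic": 0.9, "bm25": 0.8, "entity": 1.0, "temporal": 0.5},
    "Team lunch is on Friday":
        {"semantic": 0.2, "bm25": 0.1, "entity": 0.0, "temporal": 0.9},
}
ranked = sorted(candidates, key=lambda m: fuse(candidates[m]), reverse=True)
```

The point of multiple channels is robustness: a memory that scores poorly on keyword overlap can still surface through the entity graph or temporal channel, and vice versa.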

What can I use SuperLocalMemory for?

Maintaining persistent context across long-running AI coding sessions in VS Code or Cursor. Storing project-specific preferences and constraints without sending data to third-party cloud APIs. Ensuring AI memory compliance for enterprise environments subject to the EU AI Act. Managing cross-session knowledge for local AI agents in offline or air-gapped environments.

How do I install SuperLocalMemory?

Install SuperLocalMemory by running: npm install -g superlocalmemory

What MCP clients work with SuperLocalMemory?

SuperLocalMemory works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
