# SuperLocalMemory V3

The first local-only AI memory to break 74% retrieval on LoCoMo. No cloud. No APIs. No data leaves your machine.

+16pp vs Mem0 (zero cloud) · 85% Open-Domain (best of any system) · EU AI Act Ready
## Why SuperLocalMemory?
Every major AI memory system — Mem0, Zep, Letta, EverMemOS — sends your data to cloud LLMs for core operations. That means latency on every query, cost on every interaction, and after August 2, 2026, a compliance problem under the EU AI Act.
SuperLocalMemory V3 takes a different approach: mathematics instead of cloud compute. Three techniques from differential geometry, algebraic topology, and stochastic analysis replace the work that other systems need LLMs to do — similarity scoring, contradiction detection, and lifecycle management. The result is an agent memory that runs entirely on your machine, on CPU, with no API keys, and still outperforms funded alternatives.
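To illustrate the local-only idea (this is a minimal sketch, not SLM's actual algorithm, which the project attributes to differential geometry, topology, and stochastic analysis), similarity scoring is one of the tasks that needs no LLM at all: plain vector math over locally computed embeddings runs on any CPU.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings; in practice these come from a local embedding model
memories = {
    "alice-role": [0.9, 0.1, 0.2],
    "bob-hobby":  [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]

# Rank stored memories against the query entirely on-CPU, no API calls
ranked = sorted(memories, key=lambda k: cosine(query, memories[k]), reverse=True)
print(ranked[0])  # "alice-role" scores highest
```

The point of the sketch: scoring is pure arithmetic over vectors that never leave the machine, so there is no per-query latency, cost, or data-transfer concern.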
The numbers (evaluated on LoCoMo, the standard long-conversation memory benchmark):
| System | Score | Cloud Required | Open Source | Funding |
|---|---|---|---|---|
| EverMemOS | 92.3% | Yes | No | — |
| Hindsight | 89.6% | Yes | No | — |
| SLM V3 Mode C | 87.7% | Optional | Yes (MIT) | $0 |
| Zep v3 | 85.2% | Yes | Deprecated | $35M |
| SLM V3 Mode A | 74.8% | No | Yes (MIT) | $0 |
| Mem0 | 64.2% | Yes | Partial | $24M |
Mode A scores 74.8% with zero cloud dependency — outperforming Mem0 by 16 percentage points without a single API call. On open-domain questions, Mode A scores 85.0% — the highest of any system in the evaluation, including cloud-powered ones. Mode C reaches 87.7%, matching enterprise cloud systems.
Mathematical layers contribute +12.7 percentage points on average across 6 conversations (n=832 questions), with up to +19.9pp on the most challenging dialogues. This isn't more compute — it's better math.
Upgrading from V2 (2.8.6)? V3 is a complete architectural reinvention: new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the Migration Guide before upgrading. A backup is created automatically.
## Quick Start
### Install via npm (recommended)

```
npm install -g superlocalmemory
slm setup    # Choose mode (A/B/C)
slm warmup   # Pre-download embedding model (~500MB, optional)
```
### Install via pip

```
pip install superlocalmemory
```
### First Use

```
slm remember "Alice works at Google as a Staff Engineer"
slm recall "What does Alice do?"
slm status
```
### MCP Integration (Claude, Cursor, Windsurf, VS Code, etc.)

```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
```
24 MCP tools available. Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools.
## Dual Interface: MCP + CLI
SLM works everywhere, from IDEs to CI pipelines to Docker containers. It is the only AI memory system with both MCP and an agent-native CLI.
| Need | Use | Example |
|---|---|---|
| IDE integration | MCP | Auto-configured for 17+ IDEs via `slm connect` |
| Shell script | CLI | `slm recall "What does Alice do?"` |
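Because the CLI reads arguments and writes plain text, it can be scripted from any language, not just the shell. A hedged sketch in Python (it assumes `slm` is on `PATH` and that `slm recall` prints its matches to stdout, which is how CLI tools conventionally behave):

```python
import shutil
import subprocess

def recall(query: str) -> str:
    # Shell out to the slm CLI and capture whatever it prints.
    result = subprocess.run(
        ["slm", "recall", query],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if shutil.which("slm"):  # only invoke when the CLI is actually installed
    print(recall("What does Alice do?"))
```

The same `subprocess.run` pattern works for `slm remember` and `slm status`, which makes the CLI usable from cron jobs, CI steps, or agent frameworks without MCP support.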
### Core Tools

- `remember`: Store information in the local memory system.
- `recall`: Retrieve information from the local memory system.
- `status`: Check the status of the memory system.