Smriti MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone https://github.com/Smriti-AA/smriti
cd smriti

Then follow the repository README for any remaining dependency or build steps before continuing.

2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add smriti -- node "<FULL_PATH_TO_SMRITI>/dist/index.js"

Replace <FULL_PATH_TO_SMRITI> with the actual path to the folder you cloned and prepared in step 1, so the command points at its built dist/index.js.


Smriti

Sanskrit: स्मृति — memory, remembrance. A lightning-fast, self-hosted knowledge store and memory layer for AI agents.

Why Smriti?

Every AI agent needs memory. Mem0 is cloud-only. Letta is research-heavy. Neither has a knowledge graph.

Smriti is different: self-hosted, knowledge-graph-native, MCP-ready, and fast enough to handle millions of operations. Your data never leaves your machine.

Key Features

  • MCP Server — Plug into Claude, GPT, or any MCP-compatible agent instantly
  • Knowledge Graph — Notes auto-link via [[wiki-links]]; agents discover connections via graph traversal
  • Agent Memory — Key-value store with namespaces, TTL, and tool execution logs
  • Full-Text Search — SQLite FTS5 with sub-millisecond queries
  • REST API — Full CRUD + graph + agent endpoints on Axum
  • Self-Hosted — SQLite database, no cloud dependency, no API costs
  • Sync — Cross-device via Synology NAS, WebDAV, or any filesystem mount
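
The auto-linking rule in the bullets above can be approximated with two simple patterns (a sketch only; the real Rust parser may differ): double-bracketed spans become graph edges, and bare #tags become tag metadata.

```shell
# Hypothetical approximation of the auto-detection rules using grep.
content='Transformers use [[Attention Mechanisms]] for [[Parallel Processing]]. See also #transformers'

# Wiki-links: any span wrapped in [[...]] becomes a graph edge
echo "$content" | grep -o '\[\[[^]]*\]\]'

# Tags: a # followed by word characters
echo "$content" | grep -oE '#[[:alnum:]_-]+'
```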

How It Compares

| Feature | Smriti | Mem0 | Letta | LangMem |
| --- | --- | --- | --- | --- |
| Self-hosted | Yes | No (cloud) | Yes | Partial |
| Knowledge graph | Yes | No | No | No |
| MCP native | Yes | No | No | No |
| Wiki-links | Yes | No | No | No |
| Full-text search | FTS5 | Vector | Vector | Vector |
| Language | Rust | Python | Python | Python |
| TTL support | Yes | No | No | No |

Quick Start

cargo install smriti
# Create notes with wiki-links — connections are automatic
smriti create "LLM Architecture" \
  --content "Transformers use [[Attention Mechanisms]] for [[Parallel Processing]]"

smriti create "Attention Mechanisms" \
  --content "Self-attention is the core of [[LLM Architecture]]. See also #transformers"

# Search across all notes
smriti search "attention"

# View the knowledge graph
smriti graph

# Start the MCP server (for AI agents)
smriti mcp

# Start the REST API
smriti serve --port 3000

Build from source

git clone https://github.com/smriti-AA/smriti.git
cd smriti
cargo build --release
./target/release/smriti --help

MCP Server

Start with smriti mcp. Agents communicate via JSON-RPC 2.0 over stdio.
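
Because the transport is plain JSON-RPC over stdio, you can probe the server straight from a terminal; the protocol-standard tools/list method should enumerate the tools (this assumes smriti is installed and on your PATH).

```shell
# Send an MCP discovery request over stdio; the reply lists the available tools.
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | smriti mcp
```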

8 tools available to agents:

| Tool | Description |
| --- | --- |
| notes_create | Create a note with markdown content. [[wiki-links]] and #tags are auto-detected |
| notes_read | Read note by ID or title |
| notes_search | Full-text search across all notes |
| notes_list | List recent notes, optionally filtered by tag |
| notes_graph | Get full knowledge graph or subgraph around a note |
| memory_store | Store key-value memory with optional namespace and TTL |
| memory_retrieve | Retrieve a memory by agent ID, namespace, and key |
| memory_list | List all memory entries for an agent |
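
As an example of a graph query, a notes_graph call over the same stdio channel might look like the following. Note the argument name center is an assumption here, mirroring the CLI's --center flag; check the tool's schema via tools/list for the real name.

```shell
# Hypothetical call: fetch the subgraph around one note ("center" is an assumed argument name)
echo '{"jsonrpc":"2.0","id":4,"method":"tools/call","params":{"name":"notes_graph","arguments":{"center":"Scaling Laws"}}}' | smriti mcp
```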

Claude Desktop Integration

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "smriti": {
      "command": "smriti",
      "args": ["mcp", "--db", "/path/to/smriti.db"]
    }
  }
}

Example: Agent Stores and Retrieves Memory

# Agent stores a finding
echo '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"memory_store","arguments":{"agent_id":"researcher-1","key":"finding","value":"Transformers scale logarithmically with data size"}}}' | smriti mcp

# Agent creates a linked note
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"notes_create","arguments":{"title":"Scaling Laws","content":"Key insight: [[Transformer]] performance scales logarithmically. Related to [[Chinchilla]] findings."}}}' | smriti mcp
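
Reading the stored value back follows the same pattern with memory_retrieve (a sketch; argument names follow the tool table above).

```shell
# Agent retrieves the finding it stored earlier
echo '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"memory_retrieve","arguments":{"agent_id":"researcher-1","key":"finding"}}}' | smriti mcp
```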

CLI Reference

smriti create <title>       Create a note (--content, --file, --tags)
smriti read <id>            Read a note by ID or title (--json)
smriti list                 List notes (--limit, --tag, --json)
smriti search <query>       Full-text search (--limit)
smriti graph                Knowledge graph (--format json|dot|text, --center)
smriti stats                Database stats + smart link suggestions
smriti serve                REST API server (--host, --port)
smriti mcp                  MCP server over stdio
smriti sync                 Sync with remote (--remote, --direction push|pull|both)
smriti import <dir>         Import .md files (--recursive)
smriti export <dir>         Export to .md files (--frontmatter)
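
Because sync works against any filesystem mount, a periodic cron job is one simple way to keep devices converged. The mount path below is hypothetical; substitute your own NAS or WebDAV mount point.

```
# Hypothetical crontab entry: two-way sync against a NAS mount every 15 minutes
*/15 * * * * smriti sync --remote /mnt/nas/smriti --direction both
```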

REST API

Start with `smriti serve --port 3000`. The server exposes full CRUD, graph, and agent-memory endpoints on Axum.



Try it

  • Create a new note titled 'Project Roadmap' with content about our upcoming milestones and link it to [[Q4 Goals]].
  • Search my notes for any information related to 'Attention Mechanisms' and summarize the connections found.
  • Store a new memory for agent 'researcher-1' with the key 'preferred_format' and value 'markdown'.
  • List all notes tagged with #transformers to help me review my research.
  • Show me the knowledge graph connections for the note 'Scaling Laws'.

Frequently Asked Questions

What are the key features of Smriti?

  • Knowledge-graph-native storage with automatic wiki-link detection
  • SQLite-based full-text search (FTS5) for sub-millisecond queries
  • Agent memory management with namespaces and TTL support
  • MCP-ready interface for seamless integration with AI agents
  • Cross-device synchronization via filesystem mounts

What can I use Smriti for?

  • Building a persistent, self-hosted memory layer for local AI agents
  • Managing a personal knowledge base with automatic graph discovery
  • Storing agent-specific configuration and findings across sessions
  • Linking research notes and documents for better AI context retrieval

How do I install Smriti?

Install Smriti by running: cargo install smriti

What MCP clients work with Smriti?

Smriti works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
