# @doclea/mcp

Local MCP server for Doclea: persistent memory for AI coding assistants.
## Installation

### Prerequisites

- Bun (the install, build, and run commands below use `bun`)
- Docker with Compose (for the Qdrant and embeddings services)
- Git
### Step 1: Clone and Build

```bash
git clone https://github.com/your-org/doclea.git
cd doclea/packages/doclea-mcp

# Install dependencies
bun install

# Download embedding model (first time only, ~130MB)
./scripts/setup-models.sh

# Build
bun run build
```
### Step 2: Start Services

```bash
# Start Qdrant + embeddings
bun run docker:up

# Verify services
curl http://localhost:6333/readyz   # Should return "ok"
curl http://localhost:8080/health   # Should return "ok"
```
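If you script the setup, the two health checks above can be polled until the services are ready instead of eyeballed. A minimal sketch; the endpoints and ports are the defaults from Step 2:

```bash
# Poll a command until it succeeds, with a ~30s timeout.
wait_for() {
  desc="$1"; shift
  for _ in $(seq 1 30); do
    if "$@" >/dev/null 2>&1; then
      echo "$desc is up"
      return 0
    fi
    sleep 1
  done
  echo "$desc did not come up in time" >&2
  return 1
}

# Usage, with the default ports from docker-compose:
# wait_for "Qdrant"     curl -fsS http://localhost:6333/readyz
# wait_for "Embeddings" curl -fsS http://localhost:8080/health
```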
### Step 3: Add to Claude Code

**Option A: Claude Code CLI** (`~/.claude.json` or a project-level `.claude.json`):

```json
{
  "mcpServers": {
    "doclea": {
      "command": "bun",
      "args": ["run", "/absolute/path/to/doclea/packages/doclea-mcp/dist/index.js"]
    }
  }
}
```

**Option B: Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "doclea": {
      "command": "bun",
      "args": ["run", "/absolute/path/to/doclea/packages/doclea-mcp/dist/index.js"]
    }
  }
}
```

**Option C: Development** (runs the TypeScript source directly):

```json
{
  "mcpServers": {
    "doclea": {
      "command": "bun",
      "args": ["run", "/absolute/path/to/doclea/packages/doclea-mcp/src/index.ts"]
    }
  }
}
```
### Step 4: Restart Claude Code

After updating the config, restart Claude Code so it loads the MCP server.

### Step 5: Initialize Your Project

In Claude Code, navigate to your project and ask:

```
Initialize doclea for this project
```

This scans your codebase, git history, and documentation to bootstrap memories.
## Usage

Once installed, Claude Code automatically has access to these tools:

### Store a Decision

```
Store this as a decision: We're using PostgreSQL because we need ACID
compliance for financial transactions. Tag it with "database" and "infrastructure".
```

### Search for Context

```
Search memories for authentication patterns
```

### Generate a Commit Message

```
Generate a commit message for my staged changes
```

### Generate a PR Description

```
Create a PR description for this branch
```

### Find Code Experts

```
Who should review changes to src/auth/?
```

### Generate a Changelog

```
Generate a changelog from v1.0.0 to HEAD for users
```
## Configuration

Create `.doclea/config.json` in your project root (optional; defaults are used otherwise):

```json
{
  "embedding": {
    "provider": "local",
    "endpoint": "http://localhost:8080"
  },
  "qdrant": {
    "url": "http://localhost:6333",
    "collectionName": "doclea_memories"
  },
  "storage": {
    "dbPath": ".doclea/local.db"
  }
}
```
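If you prefer to create the file from the shell, the default-style config above can be written out in one step. A convenience sketch; run it from the project root:

```bash
# Write the default-style Doclea config into .doclea/config.json
mkdir -p .doclea
cat > .doclea/config.json <<'EOF'
{
  "embedding": { "provider": "local", "endpoint": "http://localhost:8080" },
  "qdrant": { "url": "http://localhost:6333", "collectionName": "doclea_memories" },
  "storage": { "dbPath": ".doclea/local.db" }
}
EOF
```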
### Embedding Providers

| Provider | Config |
|---|---|
| `local` (default) | `{ "provider": "local", "endpoint": "http://localhost:8080" }` |
| `openai` | `{ "provider": "openai", "apiKey": "sk-...", "model": "text-embedding-3-small" }` |
| `nomic` | `{ "provider": "nomic", "apiKey": "...", "model": "nomic-embed-text-v1.5" }` |
| `voyage` | `{ "provider": "voyage", "apiKey": "...", "model": "voyage-3" }` |
| `ollama` | `{ "provider": "ollama", "endpoint": "http://localhost:11434", "model": "nomic-embed-text" }` |
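For example, to switch a project to the OpenAI provider, the `embedding` block of `.doclea/config.json` would look like this (field names taken from the table above; leave the rest of the config unchanged):

```json
{
  "embedding": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "text-embedding-3-small"
  }
}
```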
## MCP Tools Reference

### Memory Tools

| Tool | Description |
|---|---|
| `doclea_store` | Store a memory (decision, solution, pattern, architecture, note) |
| `doclea_search` | Semantic search across memories |
| `doclea_get` | Get a memory by ID |
| `doclea_update` | Update an existing memory |
| `doclea_delete` | Delete a memory |

### Git Tools

| Tool | Description |
|---|---|
| `doclea_commit_message` | Generate a conventional commit from staged changes |
| `doclea_pr_description` | Generate a PR description with context |
| `doclea_changelog` | Generate a changelog between refs (markdown/json, developers/users) |

### Expertise Tools

| Tool | Description |
|---|---|
| `doclea_expertise` | Map codebase expertise, identify bus factor risks |
| `doclea_suggest_reviewers` | Suggest PR reviewers based on file ownership |

### Bootstrap Tools

| Tool | Description |
|---|---|
| `doclea_init` | Initialize the project; scan git history, docs, and code |
| `doclea_import` | Import from markdown files or ADRs |
## Memory Types

- `decision` - Architectural decisions, technology choices
- `solution` - Bug fixes, problem resolutions
- `pattern` - Code patterns, conventions
- `architecture` - System design notes
- `note` - General documentation
## Troubleshooting

### Docker services not starting

```bash
# Check logs
docker compose -f docker-compose.test.yml logs

# Restart
bun run docker:down
bun run docker:up
```

### First startup is slow

The embeddings service downloads the model (~130MB) on first run. After that, it is cached.
## Environment Variables

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | API key for the OpenAI embedding provider |
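Rather than exporting `OPENAI_API_KEY` in your shell, you can pass it to the server through the `env` field of the MCP server entry, which both Claude Code and Claude Desktop configs support. A sketch, reusing the Step 3 entry:

```json
{
  "mcpServers": {
    "doclea": {
      "command": "bun",
      "args": ["run", "/absolute/path/to/doclea/packages/doclea-mcp/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```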