Wasurenagusa MCP Server

Teach your AI coding agent to learn from its mistakes.


wasurenagusa (forget-me-not) — a Japanese flower whose name means "don't forget me."


The Problem

AI coding agents are powerful but amnesiac. Every session starts from scratch — your project conventions, past decisions, and hard-learned lessons vanish the moment a session ends.

Existing solutions either require manual effort or simply store raw memories that grow until they overwhelm the context window.

The Solution

wasurenagusa is an MCP server that doesn't just remember — it learns.

  1. Detects mistakes automatically — Catches retry patterns, user frustration, and repeated failures
  2. Distills lessons into principles — LLM compresses hundreds of raw entries into a handful of actionable rules
  3. Converts negatives to positives — Generates positiveRule alongside each principle: "don't do X" becomes "do Y instead." Research shows LLMs follow affirmative instructions significantly better than prohibitions (Pink Elephant problem)
  4. Compresses config into themes — LLM groups scattered settings into coherent summaries, preserving facts like ports and paths
  5. Injects only what matters — Consolidated wisdom + active settings only. No template bloat, no duplicate entries.
  6. Semantic memory with vector search — Gemini embeddings power meaning-based retrieval across short/medium/long-term memory tiers. Frequently accessed memories auto-promote to highest intensity.

Fully automated via Claude Code hooks — zero configuration after setup.
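Steps 2-3 above can be sketched as a record shape. The field names and structure here are illustrative assumptions, not the server's actual schema:

```typescript
// Hypothetical shape of a recorded mistake and its distilled principle.
interface DontEntry {
  dont: string;         // the observed mistake, phrased as a prohibition
  positiveRule: string; // affirmative rewrite injected into context
  occurrences: number;  // how often the pattern was detected
}

const entry: DontEntry = {
  dont: "Don't start the dev server on port 3000",
  positiveRule: "Start the dev server on port 8080",
  occurrences: 3,
};

// Only the affirmative form reaches the agent's context (Pink Elephant fix)
console.log(entry.positiveRule);
```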

Real-world impact

From the author's daily use across 8 production projects (with cross-project memory sharing between them):

1,581 "dont" entries    →  5-9 principles per project  (LLM consolidation)
  each with positiveRule →  affirmative-only injection (Pink Elephant fix)
29 config entries       →  4-5 thematic summaries      (LLM consolidation)
21,800 chars raw data   →  6,200 chars injected        (71% reduction)

Demo

What happens behind the scenes
  1. Session 1: Claude uses port 3000 — user corrects it to 8080
  2. Stop Hook: wasurenagusa auto-analyzes the conversation and records the mistake
  3. Session 2: Claude correctly uses port 8080 without being told

Why wasurenagusa

Most memory tools store what happened. wasurenagusa teaches your AI why things went wrong — and ensures it never repeats the same mistake.

It's not a memory bank. It's a learning system.

| Feature | wasurenagusa | claude-mem | mcp-memory-service | CLAUDE.md |
|---|---|---|---|---|
| Auto-detect mistakes | Yes (retry + sentiment) | No | No | No |
| Auto-consolidate (LLM) | Yes (dont→principles, config→themes) | No | Yes (decay-based) | No |
| Vector semantic search | Yes (Gemini embeddings, 768-dim) | Yes (ChromaDB) | Yes (SQLite-vec / ChromaDB) | No |
| Memory tiers (short/mid/long) | Yes (cosine distance thresholds) | No | No | No |
| Auto-promotion (intensity) | Yes (access count → intensity 5) | No | No | No |
| Zero-effort via hooks | Yes | Yes | Partial | No |
| Human-readable storage | Yes (Markdown + JSON vectors) | No (SQLite) | No (SQLite-vec) | Yes |
| Multi-LLM support | Gemini / OpenAI / Anthropic | Claude only | Local (MiniLM-L6-v2) | N/A |
| Token-efficient retrieval | Yes (index → detail, 70-90% savings) | Yes (3-layer) | N/A | No |
| Cross-project memory | Yes (top 5 active projects) | No | No | No |
| License | MIT | AGPL-3.0 | Apache-2.0 | N/A |
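The cosine-distance tiering and access-count promotion mentioned above can be illustrated with a minimal sketch. The thresholds and the promotion formula are assumptions for illustration, not wasurenagusa's actual values:

```typescript
type Tier = "short" | "medium" | "long";

// Cosine distance between two embedding vectors (0 = same direction)
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Assumed thresholds: tighter matches land in longer-lived tiers
function pickTier(distance: number): Tier {
  if (distance < 0.2) return "long";
  if (distance < 0.5) return "medium";
  return "short";
}

// Assumed promotion rule: frequent access raises intensity, capped at 5
function intensity(accessCount: number): number {
  return Math.min(5, 1 + Math.floor(accessCount / 3));
}
```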

How It Works

Session Start (Hook) — injection mode
  → Checks if consolidation is stale
  → Spawns background LLM worker if needed (non-blocking)
  → Spawns background embedding backfill worker (non-blocking)
  → Injects consolidated config + principles (layer 1) + recent 30-day entries (layer 2) + owner profile
  → Vector search injects semantically related short-term memories (layer 3)
  → Cross-project vector search injects related memories from other active projects (layer 4)
  → Only customized settings injected (defaults stripped)

Session Start (Hook) — agent mode
  → Injects dont summary + config index + owner profile (minimal footprint)
  → No vector search at startup (deferred to on-demand recall)

User Prompt (Hook) — agent mode
  → Injects 1-line reminder: "search memory if relevant"
  → Main agent spawns a memory-recall subagent on demand
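For reference, the hook wiring in Claude Code's settings.json looks roughly like the fragment below. The exact commands the server registers are an assumption here (setup normally writes this for you):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "npx -y wasurenagusa-mcp hook session-start" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npx -y wasurenagusa-mcp hook stop" }
        ]
      }
    ]
  }
}
```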

Environment Variables

| Variable | Required | Description |
|---|---|---|
| `GEMINI_API_KEY` | Required | API key for Gemini to perform analysis and embedding tasks |
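A typical MCP client registration passing this variable might look like the following. The file location and top-level key depend on your client; this is the claude_desktop_config.json shape, shown as one example:

```json
{
  "mcpServers": {
    "wasurenagusa": {
      "command": "npx",
      "args": ["-y", "wasurenagusa-mcp"],
      "env": { "GEMINI_API_KEY": "your-gemini-api-key" }
    }
  }
}
```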

Try it

Review my project's memory and summarize the current coding principles.
Check if there are any recorded mistakes regarding the project's port configuration.
Search my memory for previous decisions made about the database schema.
Consolidate recent session notes into actionable project rules.

Frequently Asked Questions

What are the key features of Wasurenagusa?

Automatically detects and records retry patterns and user corrections. Distills raw conversation history into actionable positive principles. Compresses project configurations into thematic summaries. Uses Gemini embeddings for semantic memory retrieval. Supports cross-project memory sharing for active projects.

What can I use Wasurenagusa for?

Maintaining consistent project conventions across multiple coding sessions. Preventing the AI from repeating the same configuration errors like incorrect ports or paths. Reducing context window bloat by injecting only relevant, consolidated project wisdom. Ensuring AI agents follow affirmative instructions instead of negative prohibitions.

How do I install Wasurenagusa?

Install Wasurenagusa by running: `npx -y wasurenagusa-mcp`

What MCP clients work with Wasurenagusa?

Wasurenagusa works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
