wasurenagusa
Teach your AI coding agent to learn from its mistakes.
wasurenagusa (forget-me-not) — a Japanese flower whose name means "don't forget me."
The Problem
AI coding agents are powerful but amnesiac. Every session starts from scratch — your project conventions, past decisions, and hard-learned lessons vanish the moment a session ends.
Existing solutions either require manual effort or simply store raw memories that grow until they overwhelm the context window.
The Solution
wasurenagusa is an MCP server that doesn't just remember — it learns.
- Detects mistakes automatically — Catches retry patterns, user frustration, and repeated failures
- Distills lessons into principles — LLM compresses hundreds of raw entries into a handful of actionable rules
- Converts negatives to positives — Generates a `positiveRule` alongside each principle: "don't do X" becomes "do Y instead." Research shows LLMs follow affirmative instructions significantly better than prohibitions (the Pink Elephant problem)
- Compresses config into themes — LLM groups scattered settings into coherent summaries, preserving facts like ports and paths
- Injects only what matters — Consolidated wisdom + active settings only. No template bloat, no duplicate entries.
- Semantic memory with vector search — Gemini embeddings power meaning-based retrieval across short/medium/long-term memory tiers. Frequently accessed memories auto-promote to highest intensity.
Fully automated via Claude Code hooks — zero configuration after setup.
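The tiering and promotion mechanics above can be sketched as follows. This is a minimal illustration, not wasurenagusa's actual implementation: the distance cutoffs and the access-count-to-intensity formula are assumptions chosen for the example.

```typescript
// Hypothetical sketch of memory tiering by cosine distance and
// access-count promotion. Thresholds and the promotion formula are
// assumptions, not wasurenagusa's real configuration.

type Tier = "short" | "medium" | "long";

interface Memory {
  text: string;
  accessCount: number;
  intensity: number; // 1 (weak) .. 5 (highest)
}

function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Closer semantic matches land in a longer-lived tier (assumed cutoffs).
function tierFor(distance: number): Tier {
  if (distance < 0.2) return "long";
  if (distance < 0.4) return "medium";
  return "short";
}

// Frequently accessed memories climb toward the maximum intensity of 5.
function promote(m: Memory): Memory {
  const intensity = Math.min(5, 1 + Math.floor(m.accessCount / 3));
  return { ...m, intensity };
}
```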
Real-world impact
From the author's daily use across 8 production projects (with cross-project memory sharing between them):
- 1,581 `dont` entries → 5-9 principles per project (LLM consolidation)
- each with a `positiveRule` → affirmative-only injection (Pink Elephant fix)
- 29 config entries → 4-5 thematic summaries (LLM consolidation)
- 21,800 chars of raw data → 6,200 chars injected (71% reduction)
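The consolidation step can be sketched roughly like this. The prompt wording, function names, and output parsing are all assumptions for illustration; the README does not show wasurenagusa's actual prompt.

```typescript
// Hypothetical consolidation helpers: build a prompt that asks an LLM
// to compress raw "dont" entries into a few principles, then parse the
// response. Names and prompt text are assumptions, not the real code.

function buildPrompt(donts: string[]): string {
  return (
    "Distill the following mistakes into 5-9 actionable principles, " +
    "each with an affirmative positiveRule:\n" +
    donts.map((d, i) => `${i + 1}. ${d}`).join("\n")
  );
}

// Treat each non-empty line of the LLM's answer as one principle.
function parsePrinciples(llmAnswer: string): string[] {
  return llmAnswer
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}
```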
Demo
What happens behind the scenes
- Session 1: Claude uses port 3000 — user corrects it to 8080
- Stop Hook: wasurenagusa auto-analyzes the conversation and records the mistake
- Session 2: Claude correctly uses port 8080 without being told
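A recorded entry from a correction like the one above might look as follows. The field names and shape are illustrative, not wasurenagusa's actual storage schema:

```typescript
// Illustrative record only; field names are assumptions, not the
// schema wasurenagusa writes to its Markdown/JSON store.
interface DontEntry {
  dont: string;         // negative form, as observed in the session
  positiveRule: string; // affirmative rewrite that gets injected
  project: string;
}

const portMistake: DontEntry = {
  dont: "Don't use port 3000 for the dev server",
  positiveRule: "Use port 8080 for the dev server",
  project: "example-app",
};
```

Only the affirmative `positiveRule` is injected in later sessions, per the Pink Elephant fix described above.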
Why wasurenagusa
Most memory tools store what happened. wasurenagusa teaches your AI why things went wrong — and ensures it never repeats the same mistake.
It's not a memory bank. It's a learning system.
| | wasurenagusa | claude-mem | mcp-memory-service | CLAUDE.md |
|---|---|---|---|---|
| Auto-detect mistakes | Yes (retry + sentiment) | No | No | No |
| Auto-consolidate (LLM) | Yes (dont→principles, config→themes) | No | Yes (decay-based) | No |
| Vector semantic search | Yes (Gemini embeddings, 768-dim) | Yes (ChromaDB) | Yes (SQLite-vec / ChromaDB) | No |
| Memory tiers (short/mid/long) | Yes (cosine distance thresholds) | No | No | No |
| Auto-promotion (intensity) | Yes (access count → intensity 5) | No | No | No |
| Zero-effort via hooks | Yes | Yes | Partial | No |
| Human-readable storage | Yes (Markdown + JSON vectors) | No (SQLite) | No (SQLite-vec) | Yes |
| Multi-LLM support | Gemini / OpenAI / Anthropic | Claude only | Local (MiniLM-L6-v2) | N/A |
| Token-efficient retrieval | Yes (index → detail, 70-90% savings) | Yes (3-layer) | N/A | No |
| Cross-project memory | Yes (top 5 active projects) | No | No | No |
| License | MIT | AGPL-3.0 | Apache-2.0 | N/A |
How It Works
Session Start (Hook) — injection mode
→ Checks if consolidation is stale
→ Spawns background LLM worker if needed (non-blocking)
→ Spawns background embedding backfill worker (non-blocking)
→ Injects consolidated config + principles (layer 1) + recent 30-day entries (layer 2) + owner profile
→ Vector search injects semantically related short-term memories (layer 3)
→ Cross-project vector search injects related memories from other active projects (layer 4)
→ Only customized settings injected (defaults stripped)
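The layered assembly in injection mode can be sketched as a simple concatenation. Layer names follow the steps above; the ordering and join logic are assumptions for illustration:

```typescript
// Hypothetical sketch of layered context assembly at session start.
// Layer names mirror the README; contents and ordering are assumptions.
interface Layers {
  consolidated: string[];   // layer 1: principles + config themes
  recent: string[];         // layer 2: last-30-day entries
  semantic: string[];       // layer 3: vector-matched short-term memories
  crossProject: string[];   // layer 4: related memories from other projects
}

function buildInjection(l: Layers, ownerProfile: string): string {
  return [
    ownerProfile,
    ...l.consolidated,
    ...l.recent,
    ...l.semantic,
    ...l.crossProject,
  ]
    .filter(Boolean) // drop empty sections
    .join("\n");
}
```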
Session Start (Hook) — agent mode
→ Injects dont summary + config index + owner profile (minimal footprint)
→ No vector search at startup (deferred to on-demand recall)
User Prompt (Hook) — agent mode
→ Injects 1-line reminder: "search memory if relevant"
→ Main agent spawns memory-reca
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `GEMINI_API_KEY` | required | API key for Gemini to perform analysis and embedding tasks |
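For example, export the key in the shell that launches Claude Code (the placeholder value below is yours to replace):

```shell
# Make the Gemini API key available to wasurenagusa's analysis
# and embedding workers before starting a session.
export GEMINI_API_KEY="your-key-here"
```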