Legal Contract Review Agent MCP Server

Local setup required. This server must be cloned and prepared on your machine before you register it in Claude Code.

1. Set the server up locally

Clone the repository, then run this once inside it to prepare the server before adding it to Claude Code.

Run in terminal
pip install .
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the server. Run it from the repository folder you prepared in step 1 so that the backend.mcp.server module is importable.

Run in terminal
claude mcp add -e "OPENAI_API_KEY=${OPENAI_API_KEY}" legal-contract-review -- python -m backend.mcp.server

Required: OPENAI_API_KEY

Legal Contract Review Agent

AI-powered Japanese legal contract review agent system built with LangGraph, RAG, MCP, and Tool Calling.

Chinese documentation | Japanese documentation

Demo

Demo Screenshot

Architecture

┌─────────────┐    ┌──────────────────────────────────────────┐
│  React UI   │───▶│  FastAPI Backend                         │
└─────────────┘    │                                          │
                   │  LangGraph Agent Workflow:               │
┌─────────────┐    │  parse_contract → analyze_risks          │
│   Claude    │    │  → generate_report                       │
│   Desktop   │───▶│                                          │
│ (MCP Client)│    │  Tools: analyze_clause_risk (RAG inside) │
└─────────────┘    │         generate_suggestion (LLM inside) │
                   │                                          │
                   │  RAG: ChromaDB + OpenAI Embeddings       │
                   └──────────────────────────────────────────┘
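The three workflow nodes in the diagram can be sketched, independently of LangGraph, as functions that each take and return a shared state dict. Node names come from the diagram; the bodies are illustrative stubs, not the repo's code:

```python
def parse_contract(state: dict) -> dict:
    # Split the raw contract text into clauses (here: one clause per blank-line block).
    clauses = [c.strip() for c in state["contract_text"].split("\n\n") if c.strip()]
    return {**state, "clauses": clauses}

def analyze_risks(state: dict) -> dict:
    # The real agent calls the analyze_clause_risk tool (RAG inside); stubbed here.
    risks = [{"clause": c, "level": "unreviewed"} for c in state["clauses"]]
    return {**state, "risks": risks}

def generate_report(state: dict) -> dict:
    # Collapse per-clause results into a plain-text report.
    report = "\n".join(f"- {r['clause']}: {r['level']}" for r in state["risks"])
    return {**state, "report": report}

state = {"contract_text": "Article 1 ...\n\nArticle 2 ..."}
for node in (parse_contract, analyze_risks, generate_report):
    state = node(state)
print(state["report"])
```

LangGraph's StateGraph wires the same three steps as graph nodes with edges between them, which is what makes conditional branching possible later.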

Tech Stack

  • LLM: OpenAI GPT-4o
  • Agent Framework: LangGraph (StateGraph)
  • RAG: ChromaDB + text-embedding-3-small
  • MCP: FastMCP (Python)
  • Backend: FastAPI
  • Frontend: React + Vite + TypeScript
  • Deployment: Docker Compose
  • Text Splitting: langchain-text-splitters for document chunking

Quick Start

Prerequisites

  • Docker & Docker Compose
  • OpenAI API Key

Setup & Run

cd legal-contract-agent

# Create .env from template and add your OpenAI API Key
cp .env.example .env
# Edit .env: OPENAI_API_KEY=sk-your-key-here

# Build and start all services
docker compose up --build

Open http://localhost:5173, paste a Japanese contract, and click "契約書を審査する" ("Review the contract").

To stop:

docker compose down        # Stop containers
docker compose down -v     # Stop and remove data volumes

Run Without Docker (Alternative)

# Install Python dependencies
pip install .

# Install frontend dependencies
cd frontend && npm install && cd ..

# Terminal 1: Start backend
uvicorn backend.main:app --reload

# Terminal 2: Start frontend
cd frontend && npm run dev

MCP Server (for Claude Desktop)

python -m backend.mcp.server

Add to Claude Desktop config:

{
  "mcpServers": {
    "legal-review": {
      "command": "python",
      "args": ["-m", "backend.mcp.server"],
      "cwd": "/path/to/legal-contract-agent"
    }
  }
}

Key Design Decisions

  • LangGraph over simple chain: Supports conditional branching, state management, and is extensible for multi-agent collaboration
  • RAG: Grounds agent responses in reliable legal knowledge rather than relying solely on LLM memory
  • MCP: Standardized AI tool protocol enabling any client (Claude Desktop, etc.) to invoke contract review capabilities
  • Tool Calling: Agent autonomously decides when to invoke which tool, demonstrating autonomous decision-making
  • TXT Chunking: Long .txt documents are split by RecursiveCharacterTextSplitter (chunk_size=200, overlap=40) and stored alongside JSON knowledge. Both are retrieved uniformly by store.search().
  • Contracts are query-only: User contract text is never stored in the vector database — only the curated knowledge base is indexed.
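The chunking step above can be approximated in plain Python. The sketch below mimics a fixed-size sliding window with the stated size and overlap, whereas the real RecursiveCharacterTextSplitter additionally prefers breaking at paragraph and sentence boundaries:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Sliding-window chunking approximating the splitter settings above."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 500)
# 3 chunks; consecutive chunks share a 40-character overlap
```

The overlap means a clause that straddles a chunk boundary still appears intact in at least one chunk, which matters for retrieval quality.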

RAG Evaluation

The project includes a built-in eval module to measure the retrieval quality of the RAG pipeline.

What it evaluates

The analyze_clause_risk tool relies on ChromaDB search to retrieve relevant legal knowledge for each contract clause. The eval module measures how well this retrieval performs.
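Conceptually, the retrieval step scores each knowledge-base document against the clause and returns the top-k IDs. A toy sketch, with word overlap standing in for ChromaDB's embedding similarity (document IDs and texts are illustrative, not from the repo's knowledge base):

```python
def top_k_ids(query: str, docs: dict[str, str], k: int = 3) -> list[str]:
    """Toy retrieval: rank docs by word overlap with the query (stand-in for embeddings)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(docs[d].lower().split())), reverse=True)
    return scored[:k]

kb = {
    "risk_liability_unlimited": "unlimited liability for damages without cap",
    "risk_non_compete": "non-compete obligation duration limits",
    "civil_code_415": "liability for damages due to non-performance",
}
print(top_k_ids("clause imposes unlimited liability for damages", kb, k=2))
# → ['risk_liability_unlimited', 'civil_code_415']
```

The eval module then checks whether the IDs coming back from this step match the hand-labeled relevant IDs.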

Metrics

| Metric | Description |
| --- | --- |
| Recall@K | Fraction of relevant documents found in the top-K results |
| MRR | Mean Reciprocal Rank: average of 1/rank of the first relevant result |
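With document-ID lists and the standard definitions, both metrics reduce to a few lines. This is a generic sketch (with hypothetical IDs), not the repo's eval code; MRR is the mean of reciprocal_rank over all samples:

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant set found in the top-k retrieved IDs."""
    return len(set(retrieved[:k]) & relevant) / len(relevant)

def reciprocal_rank(retrieved: list[str], relevant: set[str]) -> float:
    """1/rank of the first relevant hit, 0.0 if none is retrieved."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical sample: 2 of 3 relevant docs in the top 3, first hit at rank 2.
retrieved = ["doc_a", "doc_b", "doc_c"]
relevant = {"doc_b", "doc_c", "doc_d"}
print(recall_at_k(retrieved, relevant, 3))   # 0.666...
print(reciprocal_rank(retrieved, relevant))  # 0.5
```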

Dataset

5 hand-labeled evaluation samples in backend/data/eval_dataset.json, covering typical contract risk scenarios:

| ID | Scenario |
| --- | --- |
| eval_001 | Unlimited liability clause |
| eval_002 | Excessive non-compete period (5 years) |
| eval_003 | Unilateral termination right |
| eval_004 | IP / copyright assignment |
| eval_005 | NDA with no time limit |

Each sample contains the query text and the expected relevant document IDs from legal_knowledge.json.
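A sample might therefore look like the following (field names and the document ID are illustrative guesses, not copied from the repo):

```json
{
  "id": "eval_002",
  "query": "競業避止義務は契約終了後5年間存続する。",
  "relevant_ids": ["risk_non_compete_excessive"]
}
```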

Run the eval

# Start backend first
docker compose up --build backend

# Run evaluation with default k=3
curl http://localhost:8000/api/eval/rag

# Run with custom k
curl "http://localhost:8000/api/eval/rag?k=5"

Example response

{
  "k": 3,
  "num_samples": 5,
  "mean_recall_at_k": 0.72,
  "mrr": 0.85,
  "per_sample": [
    {
      "id": "eval_001",
      "description": "損害賠償無制限条項",
      "recall_at_k": 1.0,
      "reciprocal_rank": 1.0,
      "retrieved_ids": ["civil_code_415", "risk_liability_unlimited", "civil_code_416"],
      "relevant_ids": ["civil_code_415", "civil_code_416", "risk_liability_unlimited"]
    }
  ]
}

Tools (2)

  • analyze_clause_risk: Analyzes a specific contract clause for legal risks using RAG-based knowledge retrieval.
  • generate_suggestion: Generates improvement suggestions for a contract clause based on legal standards.

Environment Variables

OPENAI_API_KEY (required): API key for accessing OpenAI GPT-4o and embedding models.

Configuration

claude_desktop_config.json
{"mcpServers": {"legal-review": {"command": "python", "args": ["-m", "backend.mcp.server"], "cwd": "/path/to/legal-contract-agent"}}}

Try it

  • Analyze the following Japanese contract clause for potential legal risks: [paste clause here]
  • Generate a suggestion to improve this liability limitation clause to be more favorable to the client.
  • Review this contract section and identify if there are any unilateral termination rights that pose a risk.

Frequently Asked Questions

What are the key features of Legal Contract Review Agent?

RAG-enhanced legal knowledge retrieval using ChromaDB. Automated risk analysis for Japanese legal contracts. Conditional branching and state management via LangGraph. Autonomous tool calling for clause risk assessment and suggestion generation. Document chunking using RecursiveCharacterTextSplitter.

What can I use Legal Contract Review Agent for?

Reviewing Japanese commercial contracts for hidden liability risks. Identifying non-compliant clauses in NDAs or service agreements. Generating standardized legal suggestions for contract revisions. Automating the initial screening process for legal departments.

How do I install Legal Contract Review Agent?

Install Legal Contract Review Agent by cloning the repository and running pip install . from its root; see Quick Start for the Docker-based setup.

What MCP clients work with Legal Contract Review Agent?

Legal Contract Review Agent works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
