ARSR MCP Server

Add it to Claude Code

Run this in a terminal:
claude mcp add -e "ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}" mcp-arsr -- npx @jayarrowz/mcp-arsr
Required: ANTHROPIC_API_KEY, plus 5 optional variables.

Adaptive Retrieval-Augmented Self-Refinement for LLMs

Adaptive Retrieval-Augmented Self-Refinement — a closed-loop MCP server that lets LLMs iteratively verify and correct their own claims using uncertainty-guided retrieval.

What it does

Unlike one-shot RAG (retrieve → generate), ARSR runs a refinement loop:

Generate draft → Decompose claims → Score uncertainty
       ↑                                    ↓
   Decide stop ← Revise with evidence ← Retrieve for low-confidence claims

The key insight: retrieval is guided by uncertainty. Only claims the model is unsure about trigger evidence fetching, and the queries are adversarial — designed to disprove the claim, not just confirm it.
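A minimal sketch of uncertainty-guided, adversarial query construction. The query templates and function names here are illustrative only; the real server generates queries with its inner LLM:

```typescript
// Sketch: turn low-confidence claims into retrieval queries.
// Templates are illustrative; ARSR itself uses the inner LLM for this.
type Strategy = "adversarial" | "confirmatory" | "balanced";

function buildQueries(claim: string, strategy: Strategy): string[] {
  const disprove = `evidence against: ${claim}`;
  const confirm = `evidence supporting: ${claim}`;
  if (strategy === "adversarial") return [disprove, `fact check: ${claim}`];
  if (strategy === "confirmatory") return [confirm];
  return [disprove, confirm]; // balanced
}

// Only claims below the confidence threshold trigger retrieval.
function queriesForClaims(
  scored: { claim: string; confidence: number }[],
  threshold = 0.85,
  strategy: Strategy = "adversarial",
): string[] {
  return scored
    .filter((c) => c.confidence < threshold)
    .flatMap((c) => buildQueries(c.claim, strategy));
}
```

Note how the high-confidence claim produces no queries at all; that filtering, not the templates, is what keeps retrieval cheap.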

Architecture

The server exposes 6 MCP tools. The outer LLM (Claude, GPT, etc.) orchestrates the loop by calling them in sequence:

#  Tool                    Purpose
1  arsr_draft_response     Generate initial candidate answer (returns is_refusal flag)
2  arsr_decompose_claims   Split the draft into atomic, verifiable claims
3  arsr_score_uncertainty  Estimate confidence via semantic entropy
4  arsr_retrieve_evidence  Web search for low-confidence claims
5  arsr_revise_response    Rewrite the draft with retrieved evidence
6  arsr_should_continue    Decide whether to iterate or finalize
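Tool 3's semantic-entropy idea can be sketched as follows: sample several answers to rephrasings of a claim, group the samples by meaning, and compute entropy over the group distribution. In ARSR the grouping is done by the inner LLM; here the cluster labels are simply given as input:

```typescript
// Sketch: confidence from semantic entropy over answer samples.
// `clusterIds` assigns each sampled answer to a meaning cluster.
function semanticEntropy(clusterIds: number[]): number {
  const counts = new Map<number, number>();
  for (const id of clusterIds) counts.set(id, (counts.get(id) ?? 0) + 1);
  const n = clusterIds.length;
  let h = 0;
  for (const c of counts.values()) {
    const p = c / n;
    h -= p * Math.log2(p);
  }
  return h; // 0 = all samples agree; log2(n) = all disagree
}

// One plausible normalization of entropy to a [0, 1] confidence.
function confidenceFromSamples(clusterIds: number[]): number {
  const maxH = Math.log2(clusterIds.length);
  return maxH === 0 ? 1 : 1 - semanticEntropy(clusterIds) / maxH;
}
```

Three samples that all land in the same cluster give confidence 1; three mutually inconsistent samples give confidence 0, flagging the claim for retrieval.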

Inner LLM: Tools 1-5 use Claude Haiku internally for intelligence (query generation, claim extraction, evidence evaluation). This keeps costs low while the outer model handles orchestration.

Refusal detection: arsr_draft_response returns a structured is_refusal flag (classified by the inner LLM) indicating whether the draft is a non-answer. When is_refusal is true, downstream tools (decompose, revise) pivot to extracting claims from the original query and building an answer from retrieved evidence instead of trying to refine a refusal.
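The refusal branch amounts to swapping the text that claims are extracted from. A sketch with a hypothetical helper name:

```typescript
// Sketch: pick what to decompose into claims. When the inner LLM
// refused, the draft contains no verifiable content, so claims are
// extracted from the user's query instead and retrieval builds the answer.
function textToDecompose(
  draft: { draft: string; is_refusal: boolean },
  originalQuery: string,
): string {
  return draft.is_refusal ? originalQuery : draft.draft;
}
```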

Web Search: arsr_retrieve_evidence uses the Anthropic API's built-in web search tool — no external search API keys needed.
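For reference, a request using Anthropic's server-side web search tool looks roughly like this. This is a sketch of the request body that would be passed to client.messages.create() in @anthropic-ai/sdk; the tool version string may differ across API releases:

```typescript
// Sketch: request body for an evidence search via Anthropic's built-in
// web_search tool. No external search API key is involved; the search
// runs server-side and results come back in the model's response.
const searchRequest = {
  model: "claude-haiku-4-5-20251001",
  max_tokens: 1024,
  tools: [
    {
      type: "web_search_20250305", // Anthropic's built-in web search tool
      name: "web_search",
      max_uses: 3, // cap searches per claim
    },
  ],
  messages: [
    {
      role: "user",
      content: "Find evidence against: Tesla was founded in 2003.",
    },
  ],
};
```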

Setup

Prerequisites

  • Node.js 18+
  • An Anthropic API key

Install & Build

cd arsr-mcp-server
npm install
npm run build

Environment

export ANTHROPIC_API_KEY="sk-ant-..."

Run

stdio mode (for Claude Desktop, Cursor, etc.):

npm start

HTTP mode (for remote access):

TRANSPORT=http PORT=3001 npm start

Claude Desktop Configuration

Add to your claude_desktop_config.json:

Npm:

{
  "mcpServers": {
    "arsr": {
      "command": "npx",
      "args": ["@jayarrowz/mcp-arsr"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}

Local build:

{
  "mcpServers": {
    "arsr": {
      "command": "node",
      "args": ["/path/to/arsr-mcp-server/dist/src/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "ARSR_MAX_ITERATIONS": "3",
        "ARSR_ENTROPY_SAMPLES": "3",
        "ARSR_RETRIEVAL_STRATEGY": "adversarial",
        "ARSR_INNER_MODEL": "claude-haiku-4-5-20251001"
      }
    }
  }
}

How the outer LLM uses it

The orchestrating LLM calls the tools in sequence:

1. draft = arsr_draft_response({ query: "When was Tesla founded?" })
   // draft.is_refusal indicates if the inner LLM refused to answer
2. claims = arsr_decompose_claims({ draft: draft.draft, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })
3. scored = arsr_score_uncertainty({ claims: claims.claims })
4. low = scored.scored.filter(c => c.confidence < 0.85)
5. evidence = arsr_retrieve_evidence({ claims_to_check: low })
6. revised = arsr_revise_response({ draft: draft.draft, evidence: evidence.evidence, scored: scored.scored, original_query: "When was Tesla founded?", is_refusal: draft.is_refusal })
7. decision = arsr_should_continue({ iteration: 1, scored: rescored })
   // where `rescored` comes from re-running steps 2-3 on the revised text
   → if "continue": go to step 2 with the revised text
   → if "stop": return revised.revised to the user
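The sequence above can be made concrete by stubbing the tools. All tool bodies below are fakes that exist only to show the control flow; the real tools are MCP calls backed by the inner LLM:

```typescript
// Sketch of the outer refinement loop with stubbed tools.
type Scored = { claim: string; confidence: number };

const tools = {
  draft: (q: string) => ({ draft: `Draft answer for: ${q}`, is_refusal: false }),
  decompose: (text: string): string[] => [text],
  // Fake scoring: confidence rises each pass, as if evidence fixed the claims.
  score: (claims: string[], pass: number): Scored[] =>
    claims.map((claim) => ({ claim, confidence: 0.5 + 0.2 * pass })),
  retrieve: (low: Scored[]) => low.map((c) => `evidence for: ${c.claim}`),
  revise: (draft: string, evidence: string[]) =>
    evidence.length ? `${draft} [revised with ${evidence.length} sources]` : draft,
  shouldContinue: (iteration: number, scored: Scored[], maxIter = 3, threshold = 0.85) =>
    iteration < maxIter && scored.some((c) => c.confidence < threshold),
};

function refine(query: string): string {
  let { draft } = tools.draft(query);
  for (let iteration = 1; ; iteration++) {
    const scored = tools.score(tools.decompose(draft), iteration);
    const low = scored.filter((c) => c.confidence < 0.85);
    draft = tools.revise(draft, tools.retrieve(low));
    // Re-score the revised draft before deciding to iterate or stop.
    const rescored = tools.score(tools.decompose(draft), iteration);
    if (!tools.shouldContinue(iteration, rescored)) return draft;
  }
}
```

With these stubs the first pass scores 0.7, triggers retrieval and revision, and the second pass scores 0.9 and finalizes; the budget in shouldContinue guarantees termination even when confidence never clears the threshold.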

Configuration

All settings can be overridden via environment variables, falling back to defaults if unset:

Setting               Env var                    Default                    Description
max_iterations        ARSR_MAX_ITERATIONS        3                          Budget limit for refinement loops
confidence_threshold  ARSR_CONFIDENCE_THRESHOLD  0.85                       Claims above this skip retrieval
entropy_samples       ARSR_ENTROPY_SAMPLES       3                          Rephrasings for semantic entropy
retrieval_strategy    ARSR_RETRIEVAL_STRATEGY    adversarial                adversarial, confirmatory, or balanced
inner_model           ARSR_INNER_MODEL           claude-haiku-4-5-20251001  Model for internal intelligence
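The override-with-fallback behavior can be sketched as below. The variable names and defaults come from the table; the parsing logic itself is illustrative:

```typescript
// Sketch: read ARSR settings from the environment, falling back to
// the documented defaults when a variable is unset.
const defaults = {
  max_iterations: 3,
  confidence_threshold: 0.85,
  entropy_samples: 3,
  retrieval_strategy: "adversarial",
  inner_model: "claude-haiku-4-5-20251001",
};

function loadConfig(env: Record<string, string | undefined>) {
  return {
    max_iterations: Number(env.ARSR_MAX_ITERATIONS ?? defaults.max_iterations),
    confidence_threshold: Number(env.ARSR_CONFIDENCE_THRESHOLD ?? defaults.confidence_threshold),
    entropy_samples: Number(env.ARSR_ENTROPY_SAMPLES ?? defaults.entropy_samples),
    retrieval_strategy: env.ARSR_RETRIEVAL_STRATEGY ?? defaults.retrieval_strategy,
    inner_model: env.ARSR_INNER_MODEL ?? defaults.inner_model,
  };
}
```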

Cost estimate

Per refinement loop iteration (assuming ~5 claims, 3 low-confidence):

  • Inner LLM calls: ~6-10 Haiku calls ≈ $0.002-0.005
  • Web searches: 6-9 queries, included in Anthropic API usage (no separate search-provider fees)

Tools (6)

arsr_draft_response: Generate initial candidate answer and return the is_refusal flag.
arsr_decompose_claims: Split the response into atomic, verifiable claims.
arsr_score_uncertainty: Estimate confidence via semantic entropy.
arsr_retrieve_evidence: Web search for low-confidence claims.
arsr_revise_response: Rewrite the draft with evidence.
arsr_should_continue: Decide whether to iterate or finalize.

Environment Variables

ANTHROPIC_API_KEY (required): API key for Anthropic services
ARSR_MAX_ITERATIONS: Budget limit for refinement loops
ARSR_CONFIDENCE_THRESHOLD: Claims above this skip retrieval
ARSR_ENTROPY_SAMPLES: Rephrasings for semantic entropy
ARSR_RETRIEVAL_STRATEGY: Retrieval strategy (adversarial, confirmatory, or balanced)
ARSR_INNER_MODEL: Model for internal intelligence


Try it

  • Use the ARSR server to answer: 'What are the current primary causes of coral bleaching?' and ensure the claims are verified.
  • Draft a response about the history of the Apollo 11 mission and refine it using the ARSR loop.
  • Verify the accuracy of this statement: 'The Eiffel Tower was completed in 1889' using the ARSR tools.

Frequently Asked Questions

What are the key features of ARSR MCP Server?

  • Iterative self-refinement loop for LLM responses
  • Uncertainty-guided retrieval based on semantic entropy
  • Adversarial retrieval strategy designed to disprove claims
  • Automatic refusal detection and handling
  • Built-in web search via the Anthropic API

What can I use ARSR MCP Server for?

  • Fact-checking complex historical or scientific queries
  • Reducing hallucinations in long-form content generation
  • Automated research assistance for high-stakes information gathering

How do I install ARSR MCP Server?

Clone the repository, then run npm install && npm run build. Alternatively, run the published package directly with npx @jayarrowz/mcp-arsr.

What MCP clients work with ARSR MCP Server?

ARSR MCP Server works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
