
# llm-advisor-mcp

English | 日本語

Give your AI assistant real-time LLM/VLM knowledge. Pricing, benchmarks, and recommendations — updated every hour, not every training cycle.

LLMs have knowledge cutoffs. Ask Claude "what's the best coding model right now?" and it cannot answer with current data. This MCP server fixes that by feeding live model intelligence directly into your AI assistant's context window.

- **Zero config** — no API keys, no registration. One command to install.
- **Low token usage** — compact Markdown tables (~300 tokens) instead of raw JSON (~3,000 tokens). Your context window matters.
- **5 benchmark sources** — SWE-bench, LM Arena Elo, OpenCompass VLM, Aider Polyglot, and OpenRouter pricing merged into one unified view.

## Use Cases

- "What's the best coding model right now?" → `list_top_models` with category `coding`
- "Compare Claude vs GPT vs Gemini" → `compare_models` with a side-by-side table
- "Find a cheap model with 1M context" → `recommend_model` with budget constraints
- "What benchmarks does model X have?" → `get_model_info` with percentile ranks
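Under the hood, each of these maps to a standard MCP tool call. As a sketch, the first case corresponds to a JSON-RPC `tools/call` request like the following (the envelope follows the MCP specification; the argument values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_top_models",
    "arguments": { "category": "coding", "limit": 5 }
  }
}
```

Your MCP client builds and sends this for you; you only ever phrase the question in natural language.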

## Quick Start

### Claude Code

```bash
claude mcp add llm-advisor -- npx -y llm-advisor-mcp
```

### Claude Code (Windows)

```bash
claude mcp add llm-advisor -- cmd /c npx -y llm-advisor-mcp
```

### Claude Desktop / Cursor / Windsurf

Add to your MCP configuration file:

```json
{
  "mcpServers": {
    "llm-advisor": {
      "command": "npx",
      "args": ["-y", "llm-advisor-mcp"]
    }
  }
}
```

That's it. No API keys, no `.env` files.

## Compatible Clients

| Client | Supported | Install Method |
|--------|-----------|----------------|
| Claude Code | Yes | `claude mcp add` |
| Claude Desktop | Yes | JSON config |
| Cursor | Yes | JSON config |
| Windsurf | Yes | JSON config |
| Any MCP client | Yes | stdio transport |

## Tools

### `get_model_info`

Detailed specs for a specific model: pricing, benchmarks, percentile ranks, capabilities, and a ready-to-use API code example.

**Parameters**

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `model` | string | Yes | — | Model ID or partial name (e.g. `"claude-sonnet-4"`, `"gpt-5"`) |
| `include_api_example` | boolean | No | `true` | Include a ready-to-use code snippet |
| `api_format` | enum | No | `openai_sdk` | `openai_sdk`, `curl`, or `python_requests` |

**Example output**

## anthropic/claude-sonnet-4

**Provider**: anthropic | **Modality**: text+image→text | **Released**: 2025-06-25

### Pricing
| Metric | Value |
|--------|-------|
| Input | $3.00 /1M tok |
| Output | $15.00 /1M tok |
| Cache Read | $0.30 /1M tok |
| Context | 200K |
| Max Output | 64K |

### Benchmarks
| Benchmark | Score |
|-----------|-------|
| SWE-bench Verified | 76.8% |
| Aider Polyglot | 72.1% |
| Arena Elo | 1467 |
| MMMU | 76.0% |

### Percentile Ranks
| Category | Percentile |
|----------|------------|
| Coding | P96 |
| General | P95 |
| Vision | P90 |

**Capabilities**: Tools, Reasoning, Vision

### API Example (openai_sdk)
```python
from openai import OpenAI
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```

---

### `list_top_models`

Top-ranked models for a category. Includes release dates for freshness awareness.

**Parameters**

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `category` | enum | Yes | — | `coding`, `math`, `vision`, `general`, `cost-effective`, `open-source`, `speed`, `context-window`, `reasoning` |
| `limit` | number | No | `10` | Number of results (1-20) |
| `min_context` | number | No | — | Minimum context window in tokens |
| `min_release_date` | string | No | — | `YYYY-MM-DD`. Excludes models released before this date |
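Put together, a filtered query might pass arguments like the following (all values here are illustrative, not recommendations):

```json
{
  "category": "coding",
  "limit": 5,
  "min_context": 200000,
  "min_release_date": "2025-01-01"
}
```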

**Example output**

**Top 5: coding**

| # | Model | Key Score | Input $/1M | Output $/1M | Context | Released |
|---|-------|-----------|------------|-------------|---------|----------|
| 1 | openai/o3-pro | SWE 79.5% | $20.00 | $80.00 | 200K | 2025-06-10 |
| 2 | anthropic/claude-sonnet-4 | SWE 76.8% | $3.00 | $15.00 | 200K | 2025-06-25 |
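The per-million-token pricing columns make quick cost comparisons straightforward. A minimal sketch (prices taken from the tables above; the token counts are illustrative):

```python
# Estimate the cost of a single request from per-1M-token prices.
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# A 2,000-token prompt with a 500-token reply on claude-sonnet-4
# ($3.00 input / $15.00 output per 1M tokens):
cost = request_cost(2_000, 500, 3.00, 15.00)
print(f"${cost:.4f}")  # → $0.0135
```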



## Try it

- What is the best coding model available right now?
- Compare the pricing and performance of Claude, GPT-4, and Gemini models.
- Find me a cost-effective model that supports at least a 128K context window.
- What are the benchmark scores for Claude 3.5 Sonnet?

## Frequently Asked Questions

**What are the key features of LLM Advisor?**

- Real-time LLM and VLM intelligence without training-cycle delays.
- Data aggregated from 5 benchmark sources, including SWE-bench and LM Arena Elo.
- Zero configuration: no API keys or registration required.
- Compact Markdown tables that minimize token usage in context windows.

**What can I use LLM Advisor for?**

- Identifying the top-performing coding models for current development tasks.
- Comparing model capabilities and pricing side by side for cost optimization.
- Finding models that meet technical constraints like context window size.
- Retrieving up-to-date benchmark rankings for model selection.

**How do I install LLM Advisor?**

Run `claude mcp add llm-advisor -- npx -y llm-advisor-mcp`.

**What MCP clients work with LLM Advisor?**

Any MCP-compatible client, including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
