# llm-advisor-mcp
English | 日本語
Give your AI assistant real-time LLM/VLM knowledge. Pricing, benchmarks, and recommendations — updated every hour, not every training cycle.
LLMs have knowledge cutoffs. Ask Claude "what's the best coding model right now?" and it cannot answer with current data. This MCP server fixes that by feeding live model intelligence directly into your AI assistant's context window.
- Zero config — No API keys, no registration. One command to install.
- Token-efficient — Compact Markdown tables (~300 tokens), not raw JSON (~3,000 tokens). Your context window matters.
- 5 benchmark sources — SWE-bench, LM Arena Elo, OpenCompass VLM, Aider Polyglot, and OpenRouter pricing merged into one unified view.
## Use Cases
- "What's the best coding model right now?" — `list_top_models` with `category: coding`
- "Compare Claude vs GPT vs Gemini" — `compare_models` with side-by-side table
- "Find a cheap model with 1M context" — `recommend_model` with budget constraints
- "What benchmarks does model X have?" — `get_model_info` with percentile ranks
## Quick Start
### Claude Code

```bash
claude mcp add llm-advisor -- npx -y llm-advisor-mcp
```
### Claude Code (Windows)

```bash
claude mcp add llm-advisor -- cmd /c npx -y llm-advisor-mcp
```
### Claude Desktop / Cursor / Windsurf
Add to your MCP configuration file:
```json
{
  "mcpServers": {
    "llm-advisor": {
      "command": "npx",
      "args": ["-y", "llm-advisor-mcp"]
    }
  }
}
```
That's it. No API keys, no `.env` files.
## Compatible Clients
| Client | Supported | Install Method |
|---|---|---|
| Claude Code | Yes | `claude mcp add` |
| Claude Desktop | Yes | JSON config |
| Cursor | Yes | JSON config |
| Windsurf | Yes | JSON config |
| Any MCP client | Yes | stdio transport |
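Every client in the table talks to the server the same way: JSON-RPC 2.0 messages over the spawned process's stdin/stdout (MCP's stdio transport). As a minimal sketch, this is roughly the shape of the request a client writes to the server's stdin to invoke a tool — the message structure follows the MCP spec, while the `id` and argument values here are purely illustrative:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# A client asking llm-advisor for top coding models would send:
msg = make_tool_call(1, "list_top_models", {"category": "coding", "limit": 5})
print(json.dumps(msg))
```

Any client that speaks this protocol over stdio can use the server without special support.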
## Tools
### `get_model_info`
Detailed specs for a specific model: pricing, benchmarks, percentile ranks, capabilities, and a ready-to-use API code example.
**Parameters**
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `model` | string | Yes | — | Model ID or partial name (e.g. `"claude-sonnet-4"`, `"gpt-5"`) |
| `include_api_example` | boolean | No | `true` | Include a ready-to-use code snippet |
| `api_format` | enum | No | `openai_sdk` | `openai_sdk`, `curl`, or `python_requests` |
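Because `model` accepts partial names, you don't need the exact ID. The server's actual lookup logic isn't documented here; a plausible sketch is case-insensitive substring matching over the known model IDs (the ID list below is illustrative):

```python
def resolve_model(query, model_ids):
    """Return model IDs that contain the query, case-insensitively."""
    q = query.lower()
    return [mid for mid in model_ids if q in mid.lower()]

ids = ["anthropic/claude-sonnet-4", "openai/gpt-5", "openai/o3-pro"]
print(resolve_model("claude-sonnet-4", ids))  # ['anthropic/claude-sonnet-4']
```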
**Example output**
## anthropic/claude-sonnet-4
**Provider**: anthropic | **Modality**: text+image→text | **Released**: 2025-06-25
### Pricing
| Metric | Value |
|--------|-------|
| Input | $3.00 /1M tok |
| Output | $15.00 /1M tok |
| Cache Read | $0.30 /1M tok |
| Context | 200K |
| Max Output | 64K |
### Benchmarks
| Benchmark | Score |
|-----------|-------|
| SWE-bench Verified | 76.8% |
| Aider Polyglot | 72.1% |
| Arena Elo | 1467 |
| MMMU | 76.0% |
### Percentile Ranks
| Category | Percentile |
|----------|------------|
| Coding | P96 |
| General | P95 |
| Vision | P90 |
**Capabilities**: Tools, Reasoning, Vision
### API Example (openai_sdk)
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```
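The tool can also emit `curl` or `python_requests` variants via `api_format`. The exact snippet it generates isn't shown here; a plausible `python_requests` equivalent of the example above, against OpenRouter's OpenAI-compatible chat completions endpoint (the `requests.post` call is left commented so the sketch runs without a key):

```python
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer <OPENROUTER_API_KEY>",
    "Content-Type": "application/json",
}
payload = {
    "model": "anthropic/claude-sonnet-4",
    "messages": [{"role": "user", "content": "Hello"}],
}

# import requests
# response = requests.post(url, headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```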
---
### `list_top_models`
Top-ranked models for a category. Includes release dates for freshness awareness.
**Parameters**
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `category` | enum | Yes | — | `coding`, `math`, `vision`, `general`, `cost-effective`, `open-source`, `speed`, `context-window`, `reasoning` |
| `limit` | number | No | `10` | Number of results (1-20) |
| `min_context` | number | No | — | Minimum context window in tokens |
| `min_release_date` | string | No | — | `YYYY-MM-DD`. Excludes models released before this date |
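The optional filters compose: `min_context` and `min_release_date` narrow the candidate set before the `limit` cut is applied. A sketch of that filtering, assuming each model record carries `context` (tokens) and `released` (ISO `YYYY-MM-DD` string, so lexicographic comparison works) fields — the sample data below is illustrative:

```python
def filter_models(models, limit=10, min_context=None, min_release_date=None):
    """Apply optional context/date filters, then truncate to `limit`."""
    out = [
        m for m in models
        if (min_context is None or m["context"] >= min_context)
        and (min_release_date is None or m["released"] >= min_release_date)
    ]
    return out[:limit]

models = [
    {"id": "openai/o3-pro", "context": 200_000, "released": "2025-06-10"},
    {"id": "anthropic/claude-sonnet-4", "context": 200_000, "released": "2025-06-25"},
    {"id": "google/gemini-2.5-pro", "context": 1_000_000, "released": "2025-03-25"},
]
print([m["id"] for m in filter_models(models, min_context=500_000)])
# ['google/gemini-2.5-pro']
```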
**Example output**
## Top 5: coding
| # | Model | Key Score | Input $/1M | Output $/1M | Context | Released |
|---|---|---|---|---|---|---|
| 1 | openai/o3-pro | SWE 79.5% | $20.00 | $80.00 | 200K | 2025-06-10 |
| 2 | anthropic/claude-sonnet-4 | SWE 76.8% | $3.00 | $15.00 | 200K | 2025-06-25 |