Agent Friend MCP Server

Local setup required. This server has to be set up on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to install the server before adding it to Claude Code.

Run in terminal
pip install agent-friend
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add agent-friend -- node "<FULL_PATH_TO_AGENT_FRIEND>/dist/index.js"

Replace <FULL_PATH_TO_AGENT_FRIEND> with the actual path to the folder you prepared in step 1.

README.md

agent-friend

General-purpose toolbox for AI applications

Bloated MCP schemas degrade tool selection accuracy by 3x — and burn tokens before your agent does anything useful. Scalekit's benchmark: accuracy drops from 43% to 14% with verbose schemas. The average MCP server wastes 2,500+ tokens on descriptions alone.

pip install agent-friend
agent-friend fix server.json > server_fixed.json

GitHub's official MCP: 20,444 tokens → ~14,000. Same tools. More accurate. No config.

Fix

Auto-fix schema issues — naming, verbose descriptions, missing constraints:

agent-friend fix tools.json > tools_fixed.json

# agent-friend fix v0.59.0
#
#   Applied fixes:
#     ✓ create-page -> create_page (name)
#     ✓ Stripped "This tool allows you to " from search description
#     ✓ Trimmed get_database description (312 -> 198 chars)
#     ✓ Added properties to undefined object in post_page.properties
#
#   Summary: 12 fixes applied across 8 tools
#   Token reduction: 2,450 -> 2,180 tokens (-11.0%)

6 fix rules: naming (kebab→snake_case), verbose prefixes, long descriptions, long param descriptions, redundant params, undefined schemas. Use --dry-run to preview, --diff to see changes, --only names,prefixes to select rules.
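Two of those rules are simple enough to sketch in a few lines. The snippet below is an illustration of what the naming and verbose-prefix fixes do, not the tool's actual implementation, and the prefix list is an assumption based on the example output above:

```python
# Illustrative sketch of two fix rules; not agent-friend's real code.
VERBOSE_PREFIXES = ("This tool allows you to ",)  # assumed list, per the example output

def fix_name(name: str) -> str:
    """Naming rule: convert kebab-case tool names to MCP-friendly snake_case."""
    return name.replace("-", "_")

def strip_prefix(description: str) -> str:
    """Prefix rule: drop filler openers that burn tokens without adding meaning."""
    for prefix in VERBOSE_PREFIXES:
        if description.startswith(prefix):
            rest = description[len(prefix):]
            return rest[:1].upper() + rest[1:]
    return description

print(fix_name("create-page"))                                # create_page
print(strip_prefix("This tool allows you to search pages."))  # Search pages.
```

The other rules (trimming long descriptions, adding missing constraints) need more context about the schema, which is why the CLI operates on the whole `tools.json` at once.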

Grade

See how your server scores against 201 others (A+ through F):

agent-friend grade --example notion

# Overall Grade: F
# Score: 19.8/100
# Tools: 22 | Tokens: 4483

Notion's official MCP server. 22 tools. Grade F. Every tool name violates MCP naming conventions. 5 undefined schemas.

5 real servers bundled — grade spectrum from F to A+:

Server                 Tools  Grade      Tokens
--example notion          22  F  (19.8)   4,483
--example filesystem      11  D+ (64.9)   1,392
--example github          12  C+ (79.6)   1,824
--example puppeteer        7  A- (91.2)     382
--example slack            8  A+ (97.3)     721

We've graded 201 MCP servers — the top 4 most popular all score D or below. 3,991 tools, 512K tokens analyzed.

Try it live: See Notion's F grade — paste your own schema, get A–F instantly.

Validate

Catch schema errors before they crash in production:

agent-friend validate tools.json

# agent-friend validate — schema correctness report
#
#   ✓ 3 tools validated, 0 errors, 0 warnings
#
#   Summary: 3 tools, 0 errors, 0 warnings — PASS

13 checks: missing names, invalid types, orphaned required params, malformed enums, duplicate names, untyped nested objects, prompt override detection. Use --strict to treat warnings as errors, --json for CI.
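One of those checks, orphaned required params, flags names listed in `required` that never appear in `properties`. A minimal sketch of the idea, assuming a standard JSON Schema layout (not the tool's actual check):

```python
# Sketch of the "orphaned required params" check; not agent-friend's real code.
def orphaned_required(schema: dict) -> list[str]:
    """Return required parameter names that are missing from properties."""
    properties = schema.get("properties", {})
    return [name for name in schema.get("required", []) if name not in properties]

tool_schema = {
    "properties": {"city": {"type": "string"}},
    "required": ["city", "units"],  # "units" is required but never defined
}
print(orphaned_required(tool_schema))  # ['units']
```

A schema that passes this check can still fail others (invalid types, malformed enums), which is why the CLI runs all 13 checks in one pass.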

Or use the free web validator — no install needed.

Audit

See exactly where your tokens are going:

agent-friend audit tools.json

# agent-friend audit — tool token cost report
#
#   Tool                    Description      Tokens (est.)
#   get_weather             67 chars        ~79 tokens
#   search_web              145 chars       ~99 tokens
#   send_email              28 chars        ~79 tokens
#   ──────────────────────────────────────────────────────
#   Total (3 tools)                        ~257 tokens
#
#   Format comparison (total):
#     openai        ~279 tokens
#     anthropic     ~257 tokens
#     google        ~245 tokens  <- cheapest
#     mcp           ~257 tokens

Accepts OpenAI, Anthropic, MCP, Google, or JSON Schema format. Auto-detects.
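The per-format totals differ because each provider serializes the same tool differently, so the same definition costs a different number of tokens in each wrapper. A rough way to approximate one serialization's cost, using the common ~4-characters-per-token heuristic (an estimate only; this is not agent-friend's tokenizer):

```python
import json

# Rough estimate: serialized schema length divided by ~4 chars per token.
# This is a heuristic sketch, not agent-friend's actual counting method.
def estimate_tokens(tool: dict) -> int:
    """Approximate the token cost of one serialized tool definition."""
    return max(1, len(json.dumps(tool)) // 4)

tool = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
    },
}
print(estimate_tokens(tool))
```

Real counts depend on the model's tokenizer and on the wrapper JSON each provider adds around the schema, which is what the format comparison in the report surfaces.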

The quality pipeline: validate (correct?) → audit (expensive?) → optimize (suggestions) → fix (auto-repair) → grade (report card).

Write once, deploy everywhere

from agent_friend import tool

@tool
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

get_weather.to_openai()      # OpenAI function calling
get_weather.to_anthropic()   # Claude tool_use
get_weather.to_google()      # Gemini
get_weather.to_mcp()         # Model Context Protocol
get_weather.to_json_schema() # Raw JSON Schema

One function definition. Five framework formats. No vendor lock-in.
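A decorator like `@tool` can derive the parameter schema from the function signature alone. The sketch below shows that idea using only the standard library; it is a simplified illustration, and the real library's internals may differ:

```python
import inspect

# Map Python annotations to JSON Schema type names (simplified).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def signature_schema(fn) -> dict:
    """Build a minimal JSON Schema from a function's type annotations.
    Sketch only; not agent-friend's actual @tool implementation."""
    properties, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    return {"type": "object", "properties": properties, "required": required}

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

print(signature_schema(get_weather))
# {'type': 'object', 'properties': {'city': {'type': 'string'},
#  'units': {'type': 'string'}}, 'required': ['city']}
```

Once the schema exists in one neutral form, emitting the provider-specific wrappers (`to_openai`, `to_anthropic`, and so on) is mostly a matter of renaming keys.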

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "agent-friend": {
      "command": "agent-friend"
    }
  }
}

Try it

Fix the schema issues in my tools.json file to improve tool selection accuracy.
Grade my current MCP server configuration against industry standards.
Audit my tools.json file to see which tools are consuming the most tokens.
Validate my tools.json file to ensure there are no schema errors before production deployment.

Frequently Asked Questions

What are the key features of Agent Friend?

Auto-fix schema issues like naming, verbose descriptions, and missing constraints. Grade MCP servers against 201 others to identify performance bottlenecks. Validate schema correctness with 13 specific checks. Audit tool token costs across different model formats. Convert Python function definitions into multiple framework formats including MCP.

What can I use Agent Friend for?

Developers looking to reduce token consumption in their MCP servers. Teams needing to improve tool selection accuracy for their AI agents. Engineers validating schema correctness before deploying MCP servers to production. Developers standardizing tool definitions across OpenAI, Anthropic, and Google formats.

How do I install Agent Friend?

Install Agent Friend by running: pip install agent-friend

What MCP clients work with Agent Friend?

Agent Friend works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
