# agent-friend

General purpose toolbox for AI applications.
Bloated MCP schemas degrade tool selection accuracy by 3x, and they burn tokens before your agent does anything useful. In Scalekit's benchmark, accuracy dropped from 43% to 14% with verbose schemas. The average MCP server wastes 2,500+ tokens on descriptions alone.
```shell
pip install agent-friend
agent-friend fix server.json > server_fixed.json
```
GitHub's official MCP: 20,444 tokens → ~14,000. Same tools. More accurate. No config.
## Fix
Auto-fix schema issues — naming, verbose descriptions, missing constraints:
```shell
agent-friend fix tools.json > tools_fixed.json
# agent-friend fix v0.59.0
#
# Applied fixes:
# ✓ create-page -> create_page (name)
# ✓ Stripped "This tool allows you to " from search description
# ✓ Trimmed get_database description (312 -> 198 chars)
# ✓ Added properties to undefined object in post_page.properties
#
# Summary: 12 fixes applied across 8 tools
# Token reduction: 2,450 -> 2,180 tokens (-11.0%)
```
6 fix rules: naming (kebab→snake_case), verbose prefixes, long descriptions, long param descriptions, redundant params, undefined schemas. Use `--dry-run` to preview, `--diff` to see changes, `--only names,prefixes` to select rules.
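The first two rules are plain string transforms; an illustrative sketch of the idea (the prefix list below is hypothetical, not agent-friend's actual rule set):

```python
# Illustrative boilerplate openers; agent-friend's real list may differ.
VERBOSE_PREFIXES = ("This tool allows you to ", "Use this tool to ")

def fix_name(name: str) -> str:
    """Convert kebab-case tool names to MCP-friendly snake_case."""
    return name.replace("-", "_")

def strip_prefix(description: str) -> str:
    """Drop boilerplate openers that spend tokens without adding meaning."""
    for prefix in VERBOSE_PREFIXES:
        if description.startswith(prefix):
            rest = description[len(prefix):]
            return rest[:1].upper() + rest[1:]
    return description
```

For example, `fix_name("create-page")` yields `"create_page"`, matching the first fix in the report above.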
## Grade
See how your server scores against 201 others (A+ through F):
```shell
agent-friend grade --example notion
# Overall Grade: F
# Score: 19.8/100
# Tools: 22 | Tokens: 4483
```
Notion's official MCP server. 22 tools. Grade F. Every tool name violates MCP naming conventions. 5 undefined schemas.
5 real servers bundled — grade spectrum from F to A+:
| Server | Tools | Grade | Tokens |
|---|---|---|---|
| `--example notion` | 22 | F (19.8) | 4,483 |
| `--example filesystem` | 11 | D+ (64.9) | 1,392 |
| `--example github` | 12 | C+ (79.6) | 1,824 |
| `--example puppeteer` | 7 | A- (91.2) | 382 |
| `--example slack` | 8 | A+ (97.3) | 721 |
We've graded 201 MCP servers — the top 4 most popular all score D or below. 3,991 tools, 512K tokens analyzed.
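Letter grades map from the 0–100 score; a plausible mapping consistent with the bundled example scores (the cutoffs below are assumptions, not agent-friend's documented thresholds):

```python
# Assumed cutoffs, chosen only to be consistent with the bundled examples
# (19.8 -> F, 64.9 -> D+, 79.6 -> C+, 91.2 -> A-, 97.3 -> A+).
GRADE_CUTOFFS = [
    (97, "A+"), (93, "A"), (90, "A-"),
    (87, "B+"), (83, "B"), (80, "B-"),
    (77, "C+"), (73, "C"), (70, "C-"),
    (64, "D+"), (60, "D"),
]

def letter_grade(score: float) -> str:
    """Map a 0-100 quality score to a letter grade."""
    for cutoff, letter in GRADE_CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"
```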
Try it live: see Notion's F grade, then paste your own schema and get an A–F grade instantly.
## Validate
Catch schema errors before they crash in production:
```shell
agent-friend validate tools.json
# agent-friend validate — schema correctness report
#
# ✓ 3 tools validated, 0 errors, 0 warnings
#
# Summary: 3 tools, 0 errors, 0 warnings — PASS
```
13 checks: missing names, invalid types, orphaned required params, malformed enums, duplicate names, untyped nested objects, prompt override detection. Use `--strict` to treat warnings as errors, `--json` for CI.
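One of these checks, orphaned required params, is easy to picture: a parameter listed in `required` but never defined in `properties`. A minimal sketch assuming MCP-style tool dicts (not agent-friend's implementation):

```python
def orphaned_required(tool: dict) -> list[str]:
    """Return params named in `required` that aren't defined in `properties`.

    Assumes an MCP-style tool dict whose `inputSchema` is a JSON Schema
    object; a sketch of one of the 13 checks, not the actual code.
    """
    schema = tool.get("inputSchema", {})
    props = schema.get("properties", {})
    return [p for p in schema.get("required", []) if p not in props]
```

A schema that requires `units` without defining it would be flagged, since the model has no type or description to go on when filling that argument.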
Or use the free web validator — no install needed.
## Audit
See exactly where your tokens are going:
```shell
agent-friend audit tools.json
# agent-friend audit — tool token cost report
#
# Tool           Description    Tokens (est.)
# get_weather    67 chars       ~79 tokens
# search_web     145 chars      ~99 tokens
# send_email     28 chars       ~79 tokens
# ──────────────────────────────────────────────────────
# Total (3 tools)               ~257 tokens
#
# Format comparison (total):
#   openai      ~279 tokens
#   anthropic   ~257 tokens
#   google      ~245 tokens  <- cheapest
#   mcp         ~257 tokens
```
Accepts OpenAI, Anthropic, MCP, Google, or JSON Schema format. Auto-detects.
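Auto-detection can key off each vendor's distinguishing field; a heuristic sketch of the idea (not agent-friend's actual detection logic):

```python
def detect_format(tool: dict) -> str:
    """Guess which vendor format a tool definition uses.

    A heuristic sketch: each format has a telltale key. Real detection
    in agent-friend may use additional signals.
    """
    if tool.get("type") == "function" and "function" in tool:
        return "openai"      # {"type": "function", "function": {...}}
    if "input_schema" in tool:
        return "anthropic"   # Claude tool_use uses snake_case input_schema
    if "inputSchema" in tool:
        return "mcp"         # MCP uses camelCase inputSchema
    if "parameters" in tool:
        return "google"      # Gemini function declarations use parameters
    return "json_schema"     # bare JSON Schema otherwise
```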
The quality pipeline: validate (correct?) → audit (expensive?) → optimize (suggestions) → fix (auto-repair) → grade (report card).
## Write once, deploy everywhere
```python
from agent_friend import tool

@tool
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

get_weather.to_openai()       # OpenAI function calling
get_weather.to_anthropic()    # Claude tool_use
get_weather.to_google()       # Gemini
get_weather.to_mcp()          # Model Context Protocol
get_weather.to_json_schema()  # Raw JSON Schema
```
One function definition. Five framework formats. No vendor lock-in.
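Under the hood, a converter like `to_openai()` only needs the signature and docstring. A simplified sketch of the idea (agent-friend's real decorator presumably also handles defaults, unions, and richer types):

```python
import inspect

# Minimal Python-to-JSON-Schema type map; real converters cover far more.
PY_TO_JSON = {str: "string", int: "integer", float: "number",
              bool: "boolean", dict: "object"}

def to_openai(fn) -> dict:
    """Build an OpenAI function-calling spec from a Python function.

    A sketch: parameters come from the signature, the description from
    the docstring, and required params are those without defaults.
    """
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {"type": "object",
                           "properties": props,
                           "required": required},
        },
    }
```

Applied to `get_weather` above, this marks `city` as required and leaves `units` optional because it has a default.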
## Configuration

Register agent-friend as an MCP server in your client config:

```json
{
  "mcpServers": {
    "agent-friend": {
      "command": "agent-friend"
    }
  }
}
```