# Metrx MCP Server

*The AI Agent Cost Intelligence Platform*

Your AI agents are wasting money. Metrx finds out how much, and fixes it.

The official MCP server for Metrx — the AI Agent Cost Intelligence Platform. Give any MCP-compatible agent (Claude, GPT, Gemini, Cursor, Windsurf) the ability to track its own costs, detect waste, optimize model selection, and prove ROI.
## Why Metrx?
| Problem | What Metrx Does |
|---|---|
| No visibility into agent spend | Real-time cost dashboards per agent, model, and provider |
| Overpaying for LLM calls | Provider arbitrage finds cheaper models for the same task |
| Runaway costs | Budget enforcement with auto-pause when limits are hit |
| Wasted tokens | Cost leak scanner detects retry storms, context bloat, model mismatch |
| Can't prove AI ROI | Revenue attribution links agent actions to business outcomes |
## Quick Start

### Try it now — no signup required

```shell
npx @metrxbot/mcp-server --demo
```

This starts the server with sample data so you can explore all 23 tools instantly.
### Connect your real data

**Option A — Interactive login (recommended):**

```shell
npx @metrxbot/mcp-server --auth
```

Opens your browser to get an API key, validates it, and saves it to `~/.metrxrc` so you never need to set env vars.

**Option B — Environment variable:**

```shell
METRX_API_KEY=sk_live_your_key_here npx @metrxbot/mcp-server --test
```

Get your free API key at [app.metrxbot.com/sign-up](https://app.metrxbot.com/sign-up).
### Add to your MCP client (Claude Desktop, Cursor, Windsurf)

If you used `--auth`, no env block is needed — the key is read from `~/.metrxrc` automatically:

```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}
```
Or pass the key explicitly via environment:

```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"],
      "env": {
        "METRX_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```
### Remote HTTP endpoint

For remote agents (no local install needed):

```http
POST https://metrxbot.com/api/mcp
Authorization: Bearer sk_live_your_key_here
Content-Type: application/json
```
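MCP servers speak JSON-RPC 2.0 over their transport, so — assuming this endpoint follows the standard MCP HTTP transport — a minimal request body to enumerate the available tools would be a sketch like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```

The response lists the tools described below, each with a JSON Schema for its arguments.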
### From npm

```shell
npm install @metrxbot/mcp-server
```
## 23 Tools Across 10 Domains

### Dashboard (3 tools)

| Tool | Description |
|---|---|
| `metrx_get_cost_summary` | Comprehensive cost summary — total spend, call counts, error rates, and optimization opportunities |
| `metrx_list_agents` | List all agents with status, category, cost metrics, and health indicators |
| `metrx_get_agent_detail` | Detailed agent info including model, framework, cost breakdown, and performance history |
### Optimization (4 tools)

| Tool | Description |
|---|---|
| `metrx_get_optimization_recommendations` | AI-powered cost optimization recommendations per agent or fleet-wide |
| `metrx_apply_optimization` | One-click apply an optimization recommendation to an agent |
| `metrx_route_model` | Model routing recommendation for a specific task based on complexity |
| `metrx_compare_models` | Compare LLM model pricing and capabilities across providers |
### Budgets (3 tools)

| Tool | Description |
|---|---|
| `metrx_get_budget_status` | Current status of all budget configurations with spend vs. limits |
| `metrx_set_budget` | Create or update a budget with hard, soft, or monitor enforcement |
| `metrx_update_budget_mode` | Change enforcement mode of an existing budget or pause/resume it |
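As an illustration, an MCP `tools/call` request invoking `metrx_set_budget` might look like the sketch below. The argument names (`agent_id`, `monthly_limit_usd`, `mode`) are hypothetical placeholders, not the documented schema — inspect the schema returned by `tools/list` for the real parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "metrx_set_budget",
    "arguments": {
      "agent_id": "agent_123",
      "monthly_limit_usd": 50,
      "mode": "hard"
    }
  }
}
```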
### Alerts (3 tools)

| Tool | Description |
|---|---|
| `metrx_get_alerts` | Active alerts and notifications for your agent fleet |
| `metrx_acknowledge_alert` | Mark one or more alerts as read/acknowledged |
| `metrx_get_failure_predictions` | Predictive failure analysis — identify agents likely to fail before it happens |
### Experiments (3 tools)

| Tool | Description |
|---|---|
| `metrx_create_model_experiment` | Start an A/B test comparing two LLM models with traffic splitting |
| `metrx_get_experiment_ |