FleetQ MCP Server

Add it to Claude Code

Run this in a terminal.

claude mcp add -e "FLEETQ_API_KEY=${FLEETQ_API_KEY}" fleetq -- npx @escapeboy/agent-fleet-o
Required: FLEETQ_API_KEY
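Note that `${FLEETQ_API_KEY}` in the command above is expanded by your shell before Claude Code ever sees it, so the key must be exported first. A minimal sketch with a placeholder key:

```shell
# Export the key so the shell can interpolate it (placeholder value shown).
export FLEETQ_API_KEY="fq_example_key"

# This is the exact string the -e flag receives after shell expansion:
echo "FLEETQ_API_KEY=${FLEETQ_API_KEY}"
```

With the variable set, the `claude mcp add` command passes the key into the server's environment.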
README.md

FleetQ - Community Edition

Self-hosted AI Agent Mission Control platform. Build, orchestrate, and monitor AI agent experiments with a visual pipeline, human-in-the-loop approvals, and full audit trail.

Cloud Version

Prefer not to self-host? FleetQ Cloud is the fully managed version — no setup, no infrastructure, free to try.

Screenshots

Dashboard -- KPI overview with active experiments, success rate, budget spend, and pending approvals.

Agent Template Gallery -- Browse 14 pre-built agent templates across 5 categories. Search, filter by category, and deploy with one click.

Agent LLM Configuration -- Per-agent provider and model selection with fallback chains. Supports Anthropic, OpenAI, Google, and local agents.

Agent Evolution -- AI-driven agent self-improvement. Analyze execution history, propose personality and config changes, and apply with one click.

Crew Execution -- Live progress tracking during multi-agent crew execution. Each task shows its assigned skill, provider, and elapsed time.

Task Output -- Expand any completed task to inspect the AI-generated output, including structured JSON responses.

Visual Workflow Builder -- DAG-based workflow editor with conditional branching, human tasks, switch nodes, and dynamic forks.

Tool Management -- Manage MCP servers, built-in tools, and external integrations with risk classification and per-agent assignment.

AI Assistant Sidebar -- Context-aware AI chat embedded in every page with 28 built-in tools for querying and managing the platform.

Experiment Detail -- Full experiment lifecycle view with timeline, tasks, transitions, artifacts, metrics, and outbound delivery.

Settings & Webhooks -- Global platform settings, AI provider keys (BYOK), outbound connectors, and webhook configuration.

Error Handling -- Failed tasks display detailed error information including provider, error type, and request IDs for debugging.

Features

  • Experiment Pipeline -- 20-state machine with automatic stage progression (scoring, planning, building, approval, execution, metrics collection)
  • AI Agents -- Configure agents with roles, goals, backstories, personality traits, and skill assignments
  • Agent Templates -- 14 pre-built templates across 5 categories (engineering, content, business, design, research)
  • Agent Evolution -- AI-driven self-improvement: analyze execution history, propose config changes, and apply improvements
  • Agent Crews -- Multi-agent teams with lead/member roles and shared context
  • Skills -- Reusable AI skill definitions (LLM, connector, rule, hybrid, browser, RunPod, GPU compute) with versioning and cost tracking
  • RunPod GPU Integration -- Invoke RunPod serverless endpoints or manage full GPU pod lifecycles as skills; BYOK API key; spot pricing; cost tracking
  • Pluggable Compute Providers -- gpu_compute skill type backed by RunPod, Replicate, Fal.ai, and Vast.ai; configure via compute_manage MCP tool; zero platform credits
  • Local LLM Support -- Run Ollama or any OpenAI-compatible server (LM Studio, vLLM, llama.cpp) as a provider; 17 preset Ollama models; zero cost; SSRF protection
  • Integrations -- Connect GitHub, Slack, Notion, Airtable, Linear, Stripe, and generic webhooks/polling sources via unified driver interface with OAuth 2.0 support
  • Playbooks -- Sequential o
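The Local LLM Support bullet above relies on the OpenAI-compatible chat API that Ollama, LM Studio, vLLM, and llama.cpp all expose (POST /v1/chat/completions). As a rough sketch of the request shape any provider pointed at such a server sends; the model name here assumes a locally pulled Ollama model, and this is not FleetQ's exact payload:

```json
{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "Draft a plan for the next experiment stage."}
  ]
}
```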

Tools (1)

compute_manage -- Manage compute providers and GPU resources for agent skills.

Environment Variables

FLEETQ_API_KEY (required) -- API key for authenticating with the FleetQ platform.

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "fleetq": {
      "command": "npx",
      "args": ["@escapeboy/agent-fleet-o"]
    }
  }
}
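The config above omits the required API key; in Claude Desktop it can be supplied through the standard env block of the server entry (placeholder value shown):

```json
{
  "mcpServers": {
    "fleetq": {
      "command": "npx",
      "args": ["@escapeboy/agent-fleet-o"],
      "env": {
        "FLEETQ_API_KEY": "fq_example_key"
      }
    }
  }
}
```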

Try it

List all currently active AI agent experiments and their success rates.
Create a new agent crew using the 'research' template and assign it to the latest project.
Analyze the execution history of my recent content generation agents and propose configuration improvements.
Trigger a human-in-the-loop approval for the pending task in the current workflow.

Frequently Asked Questions

What are the key features of FleetQ?

  • Visual DAG-based workflow builder with conditional branching and human-in-the-loop tasks
  • Multi-agent crew orchestration with shared context and defined roles
  • AI-driven agent self-improvement through execution history analysis
  • Pluggable compute provider support including RunPod, Replicate, and local Ollama instances
  • Comprehensive experiment lifecycle management with audit trails and KPI tracking

What can I use FleetQ for?

  • Orchestrating complex multi-step AI research tasks that require human approval at specific stages
  • Managing and monitoring GPU compute costs across multiple agent-based experiments
  • Standardizing agent deployment using pre-built templates for engineering and content teams
  • Debugging failed AI agent tasks by inspecting detailed error logs and structured JSON outputs

How do I install FleetQ?

Install FleetQ by running: npx @escapeboy/agent-fleet-o

What MCP clients work with FleetQ?

FleetQ works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
