FleetQ - Community Edition
Self-hosted AI Agent Mission Control platform. Build, orchestrate, and monitor AI agent experiments with a visual pipeline, human-in-the-loop approvals, and full audit trail.
Cloud Version
Prefer not to self-host? FleetQ Cloud is the fully managed version — no setup, no infrastructure, free to try.
Screenshots
- Dashboard -- KPI overview with active experiments, success rate, budget spend, and pending approvals.
- Agent Template Gallery -- Browse 14 pre-built agent templates across 5 categories. Search, filter by category, and deploy with one click.
- Agent LLM Configuration -- Per-agent provider and model selection with fallback chains. Supports Anthropic, OpenAI, Google, and local models.
- Agent Evolution -- AI-driven agent self-improvement. Analyze execution history, propose personality and config changes, and apply with one click.
- Crew Execution -- Live progress tracking during multi-agent crew execution. Each task shows its assigned skill, provider, and elapsed time.
- Task Output -- Expand any completed task to inspect the AI-generated output, including structured JSON responses.
- Visual Workflow Builder -- DAG-based workflow editor with conditional branching, human tasks, switch nodes, and dynamic forks.
- Tool Management -- Manage MCP servers, built-in tools, and external integrations with risk classification and per-agent assignment.
- AI Assistant Sidebar -- Context-aware AI chat embedded in every page with 28 built-in tools for querying and managing the platform.
- Experiment Detail -- Full experiment lifecycle view with timeline, tasks, transitions, artifacts, metrics, and outbound delivery.
- Settings & Webhooks -- Global platform settings, AI provider keys (BYOK), outbound connectors, and webhook configuration.
- Error Handling -- Failed tasks display detailed error information including provider, error type, and request IDs for debugging.
Features
- Experiment Pipeline -- 20-state machine with automatic stage progression (scoring, planning, building, approval, execution, metrics collection)
- AI Agents -- Configure agents with roles, goals, backstories, personality traits, and skill assignments
- Agent Templates -- 14 pre-built templates across 5 categories (engineering, content, business, design, research)
- Agent Evolution -- AI-driven self-improvement: analyze execution history, propose config changes, and apply improvements
- Agent Crews -- Multi-agent teams with lead/member roles and shared context
- Skills -- Reusable AI skill definitions (LLM, connector, rule, hybrid, browser, RunPod, GPU compute) with versioning and cost tracking
- RunPod GPU Integration -- Invoke RunPod serverless endpoints or manage full GPU pod lifecycles as skills; BYOK API key; spot pricing; cost tracking
- Pluggable Compute Providers -- gpu_compute skill type backed by RunPod, Replicate, Fal.ai, and Vast.ai; configure via the compute_manage MCP tool; zero platform credits
- Local LLM Support -- Run Ollama or any OpenAI-compatible server (LM Studio, vLLM, llama.cpp) as a provider; 17 preset Ollama models; zero cost; SSRF protection
- Integrations -- Connect GitHub, Slack, Notion, Airtable, Linear, Stripe, and generic webhooks/polling sources via unified driver interface with OAuth 2.0 support
- Playbooks -- Sequential o
Tools

compute_manage -- Manage compute providers and GPU resources for agent skills.

Environment Variables

FLEETQ_API_KEY (required) -- API key for authenticating with the FleetQ platform.

Configuration

{
  "mcpServers": {
    "fleetq": {
      "command": "npx",
      "args": ["@escapeboy/agent-fleet-o"]
    }
  }
}
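Outbound webhook deliveries like the ones configured above are commonly authenticated with an HMAC signature over the raw request body. FleetQ's signing scheme is not documented here, so the `sha256=<hex>` header format and the function below are assumptions, shown only to illustrate the general pattern.

```python
import hmac
import hashlib

def verify_webhook(secret: str, body: bytes, signature_header: str) -> bool:
    """Check a hex HMAC-SHA256 of the raw body against the received signature.

    The 'sha256=<hex>' header format is an assumption for illustration,
    not FleetQ's documented scheme.
    """
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature_header)
```

A receiver would call this with the webhook secret from Settings, the unparsed request body, and the signature header before trusting the payload.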