Hive Compute MCP Server

Local setup required. This server must be installed on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to install the server before adding it to Claude Code.

Run in terminal
pip install hive-mcp
2. Register it in Claude Code

After the install finishes, run this command to point Claude Code at the server.

Run in terminal
claude mcp add hive-compute -- hive broker

The pip install in step 1 puts the hive command on your PATH, so no file path is needed.

README.md

hive-mcp

Distributed compute MCP server — pool idle LAN machines into a compute cluster for AI agents.

The Problem

Running CPU-intensive agentic workloads (backtesting, simulations, hyperparameter sweeps) can peg your host machine at 100% with just 6-7 subagents. Meanwhile, other machines on your LAN sit idle with dozens of cores unused.

The Solution

hive-mcp turns idle machines on your LAN into a unified compute pool, accessible via MCP from Claude Code, Cursor, Copilot, or any MCP-compatible AI tool.


  Host                     Worker A            Worker B
  +-----------------+      +----------------+  +----------------+
  | Claude Code     |      | hive worker    |  | hive worker    |
  | hive-mcp broker |<---->| daemon         |  | daemon         |
  | (MCP + WS)      |  ws  | auto-discovered|  | auto-discovered|
  +-----------------+      +----------------+  +----------------+
        8 cores               14 cores             6 cores
                    = 28 total cores

Quick Start

1. Install

pip install hive-mcp

2. Start the Broker (host machine)

hive broker
# Prints the shared secret and starts listening

3. Join Workers (worker machines)

Worker machines are headless compute — they only need Python and hive-mcp. No Claude Code, no AI tools, no API keys. They just execute tasks and return results.

# Copy the secret from the broker, then:
hive join --secret <token>
# Auto-discovers broker via mDNS — no address needed!

4. Configure Claude Code

Register hive-mcp as an MCP server:

claude mcp add hive-mcp -- hive broker

This writes the config to ~/.claude.json scoped to your current project directory.

Now Claude Code can submit compute tasks to your cluster:

You: "Run backtests for these 20 parameter combinations"

Claude Code: I'll run 6 locally and submit 14 to hive...
  submit_task(code="run_backtest(params_7)", priority=1)
  submit_task(code="run_backtest(params_8)", priority=1)
  ...

MCP Tools

Tool                 Description
create_batch         Create a named batch for grouping related tasks
close_batch          Close a batch after all tasks are submitted
get_batch_status     Get task counts and status for a batch
get_batch_results    Retrieve all results from a batch in one call
submit_task          Submit a Python or shell task (optionally to a batch)
get_task_status      Check if a task is queued, running, or complete
get_task_result      Retrieve the output of a completed task
pull_task            Pull a queued task back for local execution
report_local_result  Report the result of a locally executed pulled task
cancel_task          Cancel a pending or running task
list_workers         See all connected workers and their capacity
get_cluster_status   Overview of the entire cluster
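To make the batch tools concrete, here is a toy in-memory model of the batch lifecycle (create_batch, submit_task, close_batch, get_batch_results). It is an illustration of the tool semantics only, not hive-mcp's implementation: the real broker dispatches tasks to workers over WebSocket and collects their actual outputs, while this sketch just echoes the submitted code.

```python
# Toy in-memory model of the batch lifecycle exposed by the MCP tools.
# Illustrative only: the real hive-mcp broker runs tasks on workers.
from dataclasses import dataclass, field

@dataclass
class Batch:
    name: str
    closed: bool = False
    tasks: list = field(default_factory=list)  # (task_id, code) pairs

class ToyBroker:
    def __init__(self):
        self.batches = {}
        self._next_id = 0

    def create_batch(self, name):
        self.batches[name] = Batch(name)

    def submit_task(self, code, batch=None):
        task_id = f"task-{self._next_id}"
        self._next_id += 1
        if batch is not None:
            b = self.batches[batch]
            if b.closed:
                raise RuntimeError("batch is closed")
            b.tasks.append((task_id, code))
        return task_id

    def close_batch(self, name):
        self.batches[name].closed = True

    def get_batch_results(self, name):
        # Workers would produce real results; here we just echo the code.
        return {tid: f"ran: {code}" for tid, code in self.batches[name].tasks}

broker = ToyBroker()
broker.create_batch("sweep")
t1 = broker.submit_task("run_backtest(params_1)", batch="sweep")
t2 = broker.submit_task("run_backtest(params_2)", batch="sweep")
broker.close_batch("sweep")
print(broker.get_batch_results("sweep"))
```

Grouping related tasks into a batch lets the agent fetch all results in one get_batch_results call instead of polling each task individually.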

Features

  • Zero-config discovery — workers find the broker automatically via mDNS
  • Adaptive capacity — workers monitor CPU and reject tasks when overloaded (--max-cpu 80)
  • File transfer — send input files to workers, collect output files back
  • Local fallback — pull queued tasks back when local CPU frees up
  • Subprocess isolation — tasks can't crash the worker daemon
  • Priority queue — higher-priority tasks run first
  • Auto-reconnect — workers reconnect with exponential backoff
  • Claude Code hook: hive context injects cluster info into every prompt
  • Python SDK — programmatic access via HiveClient
  • Shell tasks — run shell commands, not just Python
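The "priority queue" feature above ("higher-priority tasks run first") can be sketched with Python's standard heapq. This is a conceptual sketch, not hive-mcp's scheduler, which also weighs worker capacity and CPU load:

```python
# Sketch of "higher-priority tasks run first" with a min-heap.
import heapq
import itertools

class PriorityTaskQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority

    def push(self, task, priority=0):
        # heapq pops the smallest entry, so negate: larger priority pops first.
        heapq.heappush(self._heap, (-priority, next(self._counter), task))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PriorityTaskQueue()
q.push("run_backtest(params_7)", priority=1)
q.push("urgent_recalc()", priority=5)
q.push("run_backtest(params_8)", priority=1)
print(q.pop())  # prints "urgent_recalc()" -- highest priority first
```

The counter tie-break keeps same-priority tasks in submission order, which is what you want for a sweep of equal-priority backtests.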

CLI Reference

hive broker                     # Start broker + MCP server
hive join                       # Join as worker (auto-discover broker)
hive join --broker-addr IP:PORT # Join with explicit address
hive join --max-cpu 60          # Limit CPU usage to 60%
hive join --max-tasks 4         # Hard cap at 4 concurrent tasks
hive status                     # Show cluster status
hive secret                     # Show/generate shared secret
hive context                    # Output machine + cluster info (for hooks)
hive tls-setup                  # Generate self-signed TLS certificates

Claude Code Hook

Add automatic cluster awareness to every prompt:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "command": "hive context",
        "timeout": 3000
      }
    ]
  }
}

This injects:

[hive-mcp] Local machine: 8 cores / 16 threads, CPU: 45%, RAM: 14GB free / 32GB total
[hive-mcp] Cluster: 2 workers online (20 cores), 0 queued, 3 active
[hive-mcp] Tip: 20 remote cores available via hive. Use submit_task() for overflow.
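A UserPromptSubmit hook command simply writes lines to stdout, which Claude Code injects into the prompt. Here is a rough stdlib-only sketch of what a command like hive context does; the cluster numbers are hard-coded placeholders (the real command queries the broker, and its CPU/RAM figures come from system probes not shown here):

```python
# Rough sketch of a UserPromptSubmit hook payload: print context lines to
# stdout. Cluster figures are placeholders; `hive context` queries the broker.
import os

def hive_context(workers=2, cores=20, queued=0, active=3):
    local = os.cpu_count() or 1
    lines = [
        f"[hive-mcp] Local machine: {local} logical cores",
        f"[hive-mcp] Cluster: {workers} workers online ({cores} cores), "
        f"{queued} queued, {active} active",
        f"[hive-mcp] Tip: {cores} remote cores available via hive. "
        "Use submit_task() for overflow.",
    ]
    return "\n".join(lines)

print(hive_context())
```

Because the hook runs on every prompt, the 3000 ms timeout in the config above keeps a slow broker query from stalling the conversation.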

Python SDK

import asyncio
from hive_mcp.client.sdk import HiveClient

async def main():
    async with HiveClient("192.168.1.100", 7933, secret="...") as client:
        task = await client.submit("print('hello from hive')")
        result = await client.wait(task["task_id"])

asyncio.run(main())

Configuration

claude_desktop_config.json

{
  "mcpServers": {
    "hive-mcp": {
      "command": "hive",
      "args": ["broker"]
    }
  }
}

Try it

Run backtests for these 20 parameter combinations using the cluster.
List all connected workers and check their current CPU capacity.
Submit a shell task to run a simulation script on the cluster.
Check the status of my current batch of tasks and retrieve the results.

Frequently Asked Questions

What are the key features of Hive Compute?

Zero-config worker discovery via mDNS. Adaptive CPU capacity monitoring and load balancing. Support for both Python and shell command execution. Priority-based task queuing and local fallback. Subprocess isolation for worker safety.

What can I use Hive Compute for?

Offloading CPU-intensive backtesting workloads to idle LAN machines. Running large-scale hyperparameter sweeps across multiple local computers. Distributing heavy simulation tasks to prevent host machine overload. Managing compute resources for AI agents in a local development environment.

How do I install Hive Compute?

Install Hive Compute by running: pip install hive-mcp

What MCP clients work with Hive Compute?

Hive Compute works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
