Reachy Mini MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone <REACHY_MINI_MCP_REPO_URL>
cd reachy-mini-mcp
uv pip install -e .
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add --transport stdio reachy-mini -- bash -c "cd <FULL_PATH_TO_REACHY_MINI_MCP> && uv run reachy-mini-mcp"

Replace <FULL_PATH_TO_REACHY_MINI_MCP> with the actual folder you prepared in step 1. (This is a Python server run via uv; there is no Node build step or dist/index.js.)

README.md


Reachy Mini MCP Server

An MCP (Model Context Protocol) server for controlling Reachy Mini robots. This allows Claude Desktop and other MCP clients to interact with Reachy Mini robots through natural language.

Features

  • Dance: Play choreographed dance moves
  • Emotions: Express pre-recorded emotions
  • Head Movement: Move head in different directions
  • Camera: Capture images from the robot's camera
  • Head Tracking: Enable face tracking mode
  • 🎤 Real-Time Local TTS: Text-to-speech runs entirely on-device with streaming audio - no cloud APIs, no API costs, low latency
  • Motion Control: Stop motions and query robot status

Installation

# Clone the repository
git clone <REACHY_MINI_MCP_REPO_URL>
cd reachy-mini-mcp

# Create virtual environment
uv venv --python 3.10
source .venv/bin/activate

# Install dependencies
uv pip install -e .

# Optional: Install camera support
uv pip install -e ".[camera]"

# Optional: Install speech support (text-to-speech)
uv pip install -e ".[speech]"

Configuration

Copy .env.example to .env and configure:

cp .env.example .env

Available environment variables:

| Variable | Description | Default |
| --- | --- | --- |
| `REACHY_MINI_ROBOT_NAME` | Robot name for Zenoh discovery | `reachy-mini` |
| `REACHY_MINI_ENABLE_CAMERA` | Enable camera capture | `false` |
| `REACHY_MINI_HEAD_TRACKING_ENABLED` | Start with head tracking enabled | `false` |
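For illustration, the variables above could be read into a small config object like this. This is a sketch only; the helper names (`ReachyMiniConfig`, `load_config`, `_env_bool`) are hypothetical and not the server's actual code:

```python
import os
from dataclasses import dataclass


@dataclass
class ReachyMiniConfig:
    robot_name: str
    enable_camera: bool
    head_tracking_enabled: bool


def _env_bool(name: str, default: bool = False) -> bool:
    # Treat "true"/"1"/"yes" (case-insensitive) as enabled.
    return os.environ.get(name, str(default)).strip().lower() in ("true", "1", "yes")


def load_config() -> ReachyMiniConfig:
    # Defaults match the table above.
    return ReachyMiniConfig(
        robot_name=os.environ.get("REACHY_MINI_ROBOT_NAME", "reachy-mini"),
        enable_camera=_env_bool("REACHY_MINI_ENABLE_CAMERA"),
        head_tracking_enabled=_env_bool("REACHY_MINI_HEAD_TRACKING_ENABLED"),
    )
```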

Usage

Running the server directly

reachy-mini-mcp

Claude Code CLI

Add the MCP server using the claude mcp add command:

# Build from source (after cloning the repo)
claude mcp add --transport stdio reachy-mini -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With camera support enabled
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ENABLE_CAMERA=true -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With custom robot name
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ROBOT_NAME=my-robot -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

Claude Desktop Integration

Add to your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "reachy-mini": {
      "command": "reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}

If using a virtual environment:

{
  "mcpServers": {
    "reachy-mini": {
      "command": "/path/to/reachy-mini-mcp/.venv/bin/reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}

Available Tools

`dance`

Play a dance move on the robot.

Parameters:

  • move (string, optional): Dance name or "random". Default: "random"
  • repeat (integer, optional): Number of times to repeat. Default: 1

Available moves: simple_nod, head_tilt_roll, side_to_side_sway, dizzy_spin, stumble_and_recover, interwoven_spirals, sharp_side_tilt, side_peekaboo, yeah_nod, uh_huh_tilt, neck_recoil, chin_lead, groovy_sway_and_roll, chicken_peck, side_glance_flick, polyrhythm_combo, grid_snap, pendulum_swing, jackson_square
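To illustrate how the `move` and `repeat` parameters combine, here is a small sketch (the helper `resolve_dance` is hypothetical, not the server's implementation):

```python
import random

DANCE_MOVES = [
    "simple_nod", "head_tilt_roll", "side_to_side_sway", "dizzy_spin",
    "stumble_and_recover", "interwoven_spirals", "sharp_side_tilt",
    "side_peekaboo", "yeah_nod", "uh_huh_tilt", "neck_recoil", "chin_lead",
    "groovy_sway_and_roll", "chicken_peck", "side_glance_flick",
    "polyrhythm_combo", "grid_snap", "pendulum_swing", "jackson_square",
]


def resolve_dance(move: str = "random", repeat: int = 1) -> list:
    # "random" picks one concrete move; the plan then repeats it `repeat` times.
    if move == "random":
        move = random.choice(DANCE_MOVES)
    if move not in DANCE_MOVES:
        raise ValueError("unknown dance move: " + move)
    return [move] * max(1, repeat)
```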

`play_emotion`

Play a pre-recorded emotion.

Parameters:

  • emotion (string, required): Name of the emotion to play

`move_head`

Move the robot's head in a direction.

Parameters:

  • direction (string, required): One of "left", "right", "up", "down", "front"
  • duration (float, optional): Movement duration in seconds. Default: 1.0
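A client can sanity-check these arguments before calling the tool. The validator below is an illustrative sketch (the function name and checks are assumptions, not part of the server's API):

```python
VALID_DIRECTIONS = {"left", "right", "up", "down", "front"}


def validate_move_head(direction: str, duration: float = 1.0) -> dict:
    # Normalize and validate tool arguments before sending the call.
    direction = direction.lower()
    if direction not in VALID_DIRECTIONS:
        raise ValueError("direction must be one of " + str(sorted(VALID_DIRECTIONS)))
    if duration <= 0:
        raise ValueError("duration must be positive")
    return {"direction": direction, "duration": duration}
```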

`camera`

Capture an image from the robot's camera.

Returns: Base64-encoded JPEG image

Note: Requires REACHY_MINI_ENABLE_CAMERA=true
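Because the tool returns a base64-encoded JPEG, a client can decode and save the frame like this (a sketch; `save_camera_frame` is a hypothetical helper and the input string stands in for the tool's return value):

```python
import base64


def save_camera_frame(b64_jpeg: str, path: str = "frame.jpg") -> int:
    # Decode the base64 payload and write the raw JPEG bytes to disk.
    data = base64.b64decode(b64_jpeg)
    with open(path, "wb") as f:
        f.write(data)
    return len(data)
```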

`head_tracking`

Toggle head tracking mode.

Parameters:

  • enabled (boolean, required): True to enable, False to disable

`stop_motion`

Stop all current and queued motions immediately.

`speak`

Make the robot speak using real-time local text-to-speech with natural head movement animation.

Parameters:

  • text (string, required): The text to speak
  • voice (string, optional): Voice to use. Default: "alba"

Available voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma

Note: Requires pocket-tts package. Install with uv pip install -e ".[speech]"
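Since `voice` is optional with a default of "alba", a client might resolve it like this (illustrative sketch; `pick_voice` is a hypothetical helper, not part of the server):

```python
AVAILABLE_VOICES = [
    "alba", "marius", "javert", "jean",
    "fantine", "cosette", "eponine", "azelma",
]


def pick_voice(requested=None, default="alba"):
    # Fall back to the default voice when the request is missing or unknown.
    if requested in AVAILABLE_VOICES:
        return requested
    return default
```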

Key highlights:

  • 100% Local: Runs entirely on your machine - no internet connection required after installation
  • Real-Time Streaming: Audio is generated and streamed in real-time for instant response
  • Zero API Costs: No cloud TTS services, no per-character fees, unlimited usage
  • Low Latency: Direct local processing means minimal delay between text input and speech output
  • Privacy: Your text never leaves your device

The robot's head will naturally sway and move while speaking, creating a more lifelike interaction.

`get_status`

Get the current robot status, including connection state.


Try it

Make the robot perform a random dance move.
Speak the phrase 'Hello, I am ready to assist you' using the alba voice.
Enable head tracking mode so the robot follows movement.
Capture an image from the robot's camera.
Move the robot's head to the left for 2 seconds.

Frequently Asked Questions

What are the key features of Reachy Mini?

Natural language control for robot movements and expressions. Real-time local text-to-speech with synchronized head animations. Integrated camera capture capabilities. Pre-recorded dance and emotion playback. Low-latency motion control and status monitoring.

What can I use Reachy Mini for?

Creating interactive robotic assistants for home or office environments. Developing automated demonstrations using choreographed robot dances. Implementing privacy-focused voice interaction systems without cloud dependencies. Remote monitoring and visual inspection using the robot's camera.

How do I install Reachy Mini?

Clone the repository, create a virtual environment, and run uv pip install -e . from the project directory (see Installation above for optional camera and speech extras).

What MCP clients work with Reachy Mini?

Reachy Mini works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
