MCP Spark Documentation Server

An MCP (Model Context Protocol) server that provides search and retrieval tools for Apache Spark documentation. This server enables AI assistants like Claude to search and read Spark documentation directly.

Features

  • Full-text search using SQLite FTS5 with BM25 ranking and Porter stemming
  • Section filtering to narrow search results by documentation category
  • Sparse checkout for efficient cloning of only the docs directory from apache/spark
  • Docker support for portable deployment across projects
  • STDIO transport for seamless MCP client integration
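The indexing approach described above can be sketched with Python's built-in `sqlite3` module. This is a minimal illustration of FTS5 with BM25 ranking, Porter stemming, and a section filter; the table name, columns, and sample rows are assumptions for the sketch, not the server's actual schema:

```python
import sqlite3

# In-memory FTS5 table with the Porter tokenizer (assumed schema, not the
# server's actual one).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE docs USING fts5(path, section, body, tokenize='porter')"
)
conn.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [
        ("sql-ref-functions-window.md", "sql-ref",
         "Window functions operate over a group of rows"),
        ("tuning.md", "tuning",
         "Tuning the partitioning of windowed aggregations"),
    ],
)

# Porter stemming means the query 'windows' also matches 'window'/'windowed'.
# bm25() returns lower (more negative) scores for better matches, so we sort
# ascending; the section filter narrows results to one documentation category.
rows = conn.execute(
    "SELECT path, bm25(docs) AS score FROM docs "
    "WHERE docs MATCH ? AND section = ? ORDER BY score LIMIT 10",
    ("windows", "sql-ref"),
).fetchall()
print(rows)
```

Without the `section = ?` filter, the `tuning.md` row would also match, since Porter stems "windowed" to the same token.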

Quick Start

Using Docker (Recommended)

# Build the Docker image (includes pre-indexed documentation)
make docker-build

# Test the server
make docker-run

Using uv (Local Development)

# Initialise the environment
make init

# Build the documentation index
make index

# Run the server
make run

Configuration

Claude Code / Claude Desktop

Add to your .mcp.json (Claude Code) or claude_desktop_config.json (Claude Desktop):

{
  "mcpServers": {
    "spark-documentation": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "martoc/mcp-spark-documentation:latest"]
    }
  }
}

For a locally built Docker image:

{
  "mcpServers": {
    "spark-documentation": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "mcp-spark-documentation"]
    }
  }
}

For local development without Docker:

{
  "mcpServers": {
    "spark-documentation": {
      "command": "uv",
      "args": ["run", "mcp-spark-documentation"],
      "cwd": "/path/to/mcp-spark-documentation"
    }
  }
}

MCP Tools

  • search_documentation: Search Spark documentation by keyword query, with optional section filtering
  • read_documentation: Retrieve the full content of a specific documentation page

search_documentation

Search Apache Spark documentation using full-text search with stemming support.

  • query (string, required): Search terms (supports stemming)
  • section (string, optional, default None): Filter by section (e.g., sql-ref, streaming, mllib)
  • limit (integer, optional, default 10): Maximum number of results (1-50)

Common Sections: sql-ref, api, streaming, mllib, graphx, structured-streaming, configuration, tuning
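Over STDIO, an MCP client invokes the tool with a JSON-RPC tools/call request; the argument values below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_documentation",
    "arguments": {
      "query": "window functions",
      "section": "sql-ref",
      "limit": 5
    }
  }
}
```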

read_documentation

Retrieve the full content of a documentation page.

  • path (string, required): Relative path to the document, as returned in search results
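A corresponding tools/call request looks like this (the path shown is illustrative; pass a path returned by search_documentation):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "read_documentation",
    "arguments": {
      "path": "sql-ref-functions-window.md"
    }
  }
}
```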

CLI Commands

# Build/rebuild the documentation index
uv run spark-docs-index index
uv run spark-docs-index index --rebuild
uv run spark-docs-index index --branch master

# Show index statistics
uv run spark-docs-index stats
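The sparse-checkout technique the indexer relies on can be demonstrated against a throwaway local repository (the layout and flags here are an illustration of the technique, not the tool's exact steps; requires a reasonably recent git with `--sparse` support):

```shell
set -e

# Build a throwaway "upstream" repo with a docs/ directory and other content.
src=$(mktemp -d); dst=$(mktemp -d)
git -C "$src" init -q
mkdir -p "$src/docs" "$src/core"
echo "# Tuning" > "$src/docs/tuning.md"
echo "code" > "$src/core/big-file.scala"
git -C "$src" add -A
git -C "$src" -c user.email=a@b -c user.name=a commit -qm init

# Clone with a sparse working tree, then materialise only docs/.
git clone -q --sparse "file://$src" "$dst/spark"
git -C "$dst/spark" sparse-checkout set docs
ls "$dst/spark"
```

Against apache/spark, the same pattern checks out only the docs directory instead of the full multi-gigabyte source tree.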

Development

make init       # Initialise development environment
make build      # Run full build (lint, typecheck, test)
make test       # Run tests with coverage
make format     # Format code
make lint       # Run linter
make typecheck  # Run type checker

Licence

This project is licensed under the MIT Licence - see the LICENSE file for details.


Try it

Search the Spark documentation for how to use window functions in Spark SQL.
Find the documentation for MLlib's random forest implementation.
Read the Spark documentation page for structured-streaming configuration.
Search for shuffle tuning guidance in the Spark docs.

Frequently Asked Questions

What are the key features of Spark Documentation Server?

Full-text search using SQLite FTS5 with BM25 ranking and Porter stemming. Section filtering to narrow search results by documentation category. Sparse checkout for efficient cloning of only the docs directory from apache/spark. Docker support for portable deployment across projects. STDIO transport for seamless MCP client integration.

What can I use Spark Documentation Server for?

AI assistants needing to reference official Apache Spark syntax and APIs. Developers looking for specific Spark SQL reference documentation. Data engineers searching for tuning and configuration best practices in Spark. Retrieving specific MLlib or GraphX documentation pages for code generation.

How do I install Spark Documentation Server?

No separate installation step is needed: run the published image, which ships with a pre-built documentation index, via docker run -i --rm martoc/mcp-spark-documentation:latest

What MCP clients work with Spark Documentation Server?

Spark Documentation Server works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
