Databricks MCP Server

Local setup required. This server must be cloned and prepared on your machine before you register it in Claude Code.
Step 1: Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in a terminal:
git clone https://github.com/ChrisChoTW/databricks-mcp.git
cd databricks-mcp
uv sync
Step 2: Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in a terminal:
claude mcp add -e "DATABRICKS_SERVER_HOSTNAME=${DATABRICKS_SERVER_HOSTNAME}" -e "DATABRICKS_HTTP_PATH=${DATABRICKS_HTTP_PATH}" -e "DATABRICKS_TOKEN=${DATABRICKS_TOKEN}" databricks-mcp -- uv --directory "<FULL_PATH_TO_DATABRICKS_MCP>" run python server.py

Replace <FULL_PATH_TO_DATABRICKS_MCP> with the full path to the folder you prepared in step 1. This is a Python server launched with uv, matching the README configuration below.

Required: DATABRICKS_SERVER_HOSTNAME, DATABRICKS_HTTP_PATH, DATABRICKS_TOKEN
README.md

A read-only MCP server for Databricks SQL, metadata, and monitoring.

databricks-mcp

Read this in other languages: Traditional Chinese (正體中文)

A read-only MCP (Model Context Protocol) server for Databricks, enabling Claude to query Databricks SQL, browse metadata, and monitor jobs/pipelines.

Features

  • SQL Queries: Execute SELECT, SHOW, DESCRIBE queries (write operations blocked)
  • Metadata Browsing: List catalogs, schemas, tables, and search tables
  • Delta Lake: View table history, details, and grants
  • Jobs & Pipelines: List and monitor Databricks Jobs and DLT Pipelines
  • Query History: Browse SQL query history with filters
  • Cluster Metrics: Monitor CPU, memory, network usage from system tables

Installation

Prerequisites

  • Python 3.13+
  • uv package manager
  • Databricks workspace with SQL Warehouse

Setup

# Clone the repository
git clone https://github.com/ChrisChoTW/databricks-mcp.git
cd databricks-mcp

# Install dependencies
uv sync

# Create .env file
cp .env.example .env

Configuration

Edit .env with your Databricks credentials:

DATABRICKS_SERVER_HOSTNAME=your-workspace.cloud.databricks.com
DATABRICKS_HTTP_PATH=/sql/1.0/warehouses/your-warehouse-id
DATABRICKS_TOKEN=your-personal-access-token
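The server reads these values from the environment at startup. A minimal, stdlib-only sketch of how a .env file like the one above can be loaded (the project itself may use a dedicated loader such as python-dotenv; this parser is an illustrative assumption):

```python
import os

def load_dotenv(path=".env"):
    """Minimal KEY=VALUE parser: skips blank lines and comments, and
    never overwrites variables already present in the environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling load_dotenv(), the three DATABRICKS_* values are available via os.environ for the connection code.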

Usage

With Claude Code

Add to your Claude Code MCP configuration (~/.claude.json):

{
  "mcpServers": {
    "databricks-sql": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/databricks-mcp",
        "run",
        "python",
        "server.py"
      ],
      "env": {
        "DATABRICKS_SERVER_HOSTNAME": "your-workspace.cloud.databricks.com",
        "DATABRICKS_HTTP_PATH": "/sql/1.0/warehouses/your-warehouse-id",
        "DATABRICKS_TOKEN": "your-token"
      }
    }
  }
}

Standalone

uv run python server.py

Available Tools

SQL & Metadata

| Tool | Description |
| --- | --- |
| databricks_query | Execute SQL queries (read-only) |
| list_catalogs | List all catalogs |
| list_schemas | List schemas in a catalog |
| list_tables | List tables in a schema |
| get_table_schema | Get table structure (DESCRIBE EXTENDED) |
| search_tables | Search tables by name |

Delta Lake

| Tool | Description |
| --- | --- |
| get_table_history | View Delta table change history |
| get_table_detail | View Delta table details |
| get_grants | View object permissions |
| list_volumes | List Unity Catalog volumes |

Jobs & Pipelines

| Tool | Description |
| --- | --- |
| list_jobs | List Databricks Jobs |
| get_job | Get job details |
| list_job_runs | List job run history |
| get_job_run | Get run details |
| list_pipelines | List DLT Pipelines |
| get_pipeline | Get pipeline status |

Compute & Monitoring

| Tool | Description |
| --- | --- |
| list_query_history | List SQL query history |
| list_warehouses | List SQL Warehouses |
| list_clusters | List clusters |
| get_cluster_metrics | Get cluster CPU/memory metrics |
| get_cluster_events | Get cluster events |

Project Structure

databricks-mcp/
├── server.py         # Entry point
├── core.py           # Shared connections and MCP instance
└── tools/
    ├── query.py      # SQL queries and metadata
    ├── delta.py      # Delta Lake and permissions
    ├── jobs.py       # Jobs management
    ├── pipelines.py  # DLT Pipelines
    ├── compute.py    # Clusters and query history
    └── metrics.py    # Cluster metrics
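In this layout, core.py holds one shared server object that every module under tools/ imports and registers against; importing those modules in server.py is what wires everything up. A plain-Python sketch of that pattern (the registry, decorator, and sample tool here are illustrative assumptions, not the project's actual code, which registers tools on an MCP server instance):

```python
# core.py (sketch): a shared registry standing in for the MCP server instance
TOOLS = {}

def tool(func):
    """Decorator each tools/ module uses to register its functions by name."""
    TOOLS[func.__name__] = func
    return func

# tools/query.py (sketch): registration happens as a side effect of import
@tool
def list_catalogs():
    # The real tool would run SHOW CATALOGS against the SQL Warehouse.
    return ["main", "samples"]

# server.py (sketch): importing the tool modules populates the registry,
# then the server exposes everything in TOOLS to the MCP client.
```

Keeping the shared instance in core.py avoids circular imports: tool modules depend on core, and the entry point depends on both.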

Security

This server is read-only by design:

  • ❌ INSERT, UPDATE, DELETE, DROP, TRUNCATE, MERGE, COPY blocked
  • ✅ SELECT, SHOW, DESCRIBE, CREATE VIEW allowed
  • Credentials are passed via environment variables (never hardcoded)
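The write-keyword blocking described above can be implemented as a word-boundary check on the statement text. A minimal sketch under that assumption (the project's actual guard may be stricter or parse the statement properly):

```python
import re

# Keywords the README lists as blocked for read-only operation.
BLOCKED = {"INSERT", "UPDATE", "DELETE", "DROP", "TRUNCATE", "MERGE", "COPY"}

def is_read_only(sql: str) -> bool:
    """Return False if any blocked write keyword appears anywhere in the
    statement (case-insensitive, whole-word match); conservative by design."""
    return not any(re.search(rf"\b{kw}\b", sql, re.IGNORECASE) for kw in BLOCKED)
```

Note that because CREATE is not in the blocklist, statements like CREATE VIEW pass, consistent with the allowed list above.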

License

MIT

Contributing

Issues and pull requests are welcome!


Environment Variables

DATABRICKS_SERVER_HOSTNAME (required): The hostname of your Databricks workspace
DATABRICKS_HTTP_PATH (required): The HTTP path for your SQL Warehouse
DATABRICKS_TOKEN (required): Your Databricks personal access token
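Since all three variables are required, a startup check that fails fast with a complete list of what is missing saves debugging time. A sketch (an illustrative assumption; the server's actual error handling may differ):

```python
import os

REQUIRED = ("DATABRICKS_SERVER_HOSTNAME", "DATABRICKS_HTTP_PATH", "DATABRICKS_TOKEN")

def check_env():
    """Raise one error naming every missing required variable, not just the first."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
```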

Configuration

For Claude Desktop, add the same server entry to claude_desktop_config.json:

{
  "mcpServers": {
    "databricks-sql": {
      "type": "stdio",
      "command": "uv",
      "args": ["--directory", "/path/to/databricks-mcp", "run", "python", "server.py"],
      "env": {
        "DATABRICKS_SERVER_HOSTNAME": "your-workspace.cloud.databricks.com",
        "DATABRICKS_HTTP_PATH": "/sql/1.0/warehouses/your-warehouse-id",
        "DATABRICKS_TOKEN": "your-token"
      }
    }
  }
}

Try it

List all tables in the 'main' catalog and 'default' schema.
What is the current status of my DLT pipelines?
Show me the recent run history for the job named 'daily-etl-process'.
Get the CPU and memory metrics for my active compute clusters.
Search for tables related to 'customer_data' in the workspace.

Frequently Asked Questions

What are the key features of Databricks MCP?

  • Read-only SQL query execution for data analysis
  • Comprehensive metadata browsing for catalogs, schemas, and tables
  • Monitoring capabilities for Databricks Jobs and DLT Pipelines
  • Delta Lake table history and permission inspection
  • Cluster performance and query history monitoring

What can I use Databricks MCP for?

  • Data analysts: query Databricks tables directly through Claude
  • Data engineers: monitor pipeline status and job failures
  • Administrators: audit table permissions and Delta history
  • Platform engineers: check cluster health and resource utilization

How do I install Databricks MCP?

Install Databricks MCP by running: git clone https://github.com/ChrisChoTW/databricks-mcp.git && cd databricks-mcp && uv sync

What MCP clients work with Databricks MCP?

Databricks MCP works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
