JLab MCP Server

Local setup required. This server must be cloned and built on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

Run in terminal
git clone https://github.com/kdkyum/jlab-mcp
cd jlab-mcp

Then follow the repository README for any remaining dependency or build steps before continuing.

2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add jlab-mcp -- node "<FULL_PATH_TO_JLAB_MCP>/dist/index.js"

Replace <FULL_PATH_TO_JLAB_MCP> with the actual path to the folder you cloned and built in step 1.

README.md

Execute Python code on GPU compute nodes via JupyterLab on SLURM clusters

jlab-mcp

A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.

Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.

Architecture

Claude Code
    ↕ stdio
MCP Server
    ↕ HTTP/WebSocket
JupyterLab (SLURM compute node or local subprocess)   ← one server, many kernels
    ↕
IPython Kernels (GPU access)

JupyterLab runs either on a SLURM compute node (HPC clusters) or as a local subprocess (laptops/workstations). The server is managed separately from the MCP server — you start it with jlab-mcp start and it keeps running across Claude Code sessions. All sessions create separate kernels on this shared server. Each project directory gets its own JupyterLab instance — the status file is scoped by a hash of the working directory where jlab-mcp start was run.
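The per-project scoping can be pictured with a short sketch. The exact hashing scheme is internal to jlab-mcp and may differ; `status_dir` below is a hypothetical helper, shown only to illustrate how deriving the status directory from the working directory keeps projects separate:

```python
import hashlib
from pathlib import Path

def status_dir(base: Path, cwd: Path) -> Path:
    # Hypothetical sketch: jlab-mcp's real scheme may differ.
    # Hashing the working directory gives each project its own
    # status directory, so two projects never share server state.
    digest = hashlib.sha256(str(cwd).encode()).hexdigest()[:8]
    return base / "servers" / f"{cwd.name}-{digest}"

# Two projects map to two distinct status directories:
print(status_dir(Path.home() / ".jlab-mcp", Path("/shared/fs/my-project")))
print(status_dir(Path.home() / ".jlab-mcp", Path("/shared/fs/other-project")))
```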

Local Mode

On machines without SLURM (laptops, workstations), jlab-mcp automatically runs JupyterLab as a local subprocess. Mode is auto-detected: if sbatch is on PATH, SLURM mode is used; otherwise, local mode.

Override with an environment variable:

export JLAB_MCP_RUN_MODE=local   # force local mode
export JLAB_MCP_RUN_MODE=slurm   # force SLURM mode

In local mode, jlab-mcp start runs in the foreground — press Ctrl+C to stop. The status file uses the same format as SLURM mode, so the MCP server works identically in both modes.
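The auto-detection rule above reduces to a few lines. This sketch mirrors the documented behaviour (`detect_run_mode` is illustrative, not part of jlab-mcp's public API):

```python
import os
import shutil

def detect_run_mode() -> str:
    # An explicit JLAB_MCP_RUN_MODE wins; otherwise use SLURM
    # mode when sbatch is on PATH, local mode when it is not.
    override = os.environ.get("JLAB_MCP_RUN_MODE")
    if override in ("local", "slurm"):
        return override
    return "slurm" if shutil.which("sbatch") else "local"
```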

Setup

# Install (no git clone needed)
uv tool install git+https://github.com/kdkyum/jlab-mcp.git

The SLURM job activates .venv in the current working directory. Set up your project's venv on the shared filesystem with the compute dependencies:

cd /shared/fs/my-project
uv venv
uv pip install jupyterlab ipykernel matplotlib numpy
uv pip install torch --index-url https://download.pytorch.org/whl/cu126  # GPU support

Usage

1. Start the compute node

In a separate terminal, start the SLURM job:

jlab-mcp start              # uses default time limit (4h)
jlab-mcp start 24:00:00     # 24 hour time limit
jlab-mcp start 1-00:00:00   # 1 day

This submits the job and waits until JupyterLab is ready:

SLURM job 24215408 submitted, waiting in queue...
Job running on ravg1011, JupyterLab starting...
JupyterLab ready at http://ravg1011:18432
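The wait amounts to polling the server until it answers. A rough sketch, assuming a plain HTTP health check; the actual endpoint, timeout, and polling interval jlab-mcp uses may differ:

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, timeout: float = 600.0,
                     interval: float = 5.0) -> bool:
    # Poll the JupyterLab base URL until it responds, roughly what
    # `jlab-mcp start` does while the SLURM job waits in the queue.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; keep polling
        time.sleep(interval)
    return False
```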

2. Use Claude Code

In another terminal, start Claude Code. The MCP server connects to the running JupyterLab automatically.

3. Stop when done

jlab-mcp stop

CLI Commands

| Command | Description |
| --- | --- |
| jlab-mcp start [TIME] [--debug] | Start JupyterLab and wait until ready. In SLURM mode, submits a job and polls until the server responds. In local mode, spawns a subprocess and blocks in the foreground. Optional TIME overrides JLAB_MCP_SLURM_TIME (e.g. 24:00:00). Skips submission if an existing server is still running. |
| jlab-mcp stop | Stop JupyterLab. In SLURM mode, runs scancel. In local mode, sends SIGTERM to the subprocess. Removes the status file in both cases. |
| jlab-mcp wait | Poll the status file from another terminal until the server is ready (up to 10 min). Prints state transitions (pending → starting → ready). Useful for scripts or for monitoring start progress from a separate shell. |
| jlab-mcp status | Print server state, mode, hostname, port, and whether the process/job is alive. Lists active kernels with execution state and last activity time. Queries GPU memory and utilization via nvidia-smi on a temporary kernel. |
| jlab-mcp | Run the MCP server (stdio transport, used by Claude Code; not run manually). |

All commands accept --debug to enable verbose logging (status file reads, SLURM parameters, health check attempts, connection file paths) on stderr.
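The kernel listing that jlab-mcp status prints can be reproduced against the standard Jupyter Server REST API (`GET /api/kernels`). The helpers below are an illustrative sketch, not the tool's actual implementation:

```python
import json
import urllib.request

def list_kernels(base_url: str, token: str) -> list[dict]:
    # Query the Jupyter Server REST API for active kernels.
    # Token-in-header auth is one common setup; yours may differ.
    req = urllib.request.Request(
        f"{base_url}/api/kernels",
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def summarize(kernels: list[dict]) -> list[str]:
    # One line per kernel: short id, execution state, last activity.
    return [
        f"{k['id'][:8]}  {k['execution_state']:<8} {k['last_activity']}"
        for k in kernels
    ]
```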

The SLURM job survives Claude Code restarts. You only need to run jlab-mcp start once per work session.

Configuration

All settings are configurable via environment variables. No values are hardcoded for a specific cluster.

| Environment Variable | Default | Description |
| --- | --- | --- |
| JLAB_MCP_DIR | ~/.jlab-mcp | Base working directory |
| JLAB_MCP_NOTEBOOK_DIR | ./notebooks | Notebook storage (relative to cwd) |
| JLAB_MCP_LOG_DIR | ~/.jlab-mcp/logs | SLURM job logs |
| JLAB_MCP_STATUS_DIR | ~/.jlab-mcp/servers/{name}-{hash} | Per-project status directory (auto-derived from cwd) |
| JLAB_MCP_CONNECTION_DIR | ~/.jlab-mcp/connections | Connection info files |
| JLAB_MCP_SLURM_PARTITION | gpu | SLURM partition |
| JLAB_MCP_SLURM_GRES | gpu:1 | SLURM generic resource |
| JLAB_MCP_SLURM_CPUS | 4 | CPUs per task |
| JLAB_MCP_SLURM_MEM | 32000 | Memory in MB |
| JLAB_MCP_SLURM_TIME | 4:00:00 | SLURM job time limit |
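The environment-variable-with-default pattern reduces to a simple lookup. A minimal sketch of the idea; the actual config loader in jlab-mcp may be structured differently:

```python
import os

# Illustrative defaults matching the SLURM settings documented above.
DEFAULTS = {
    "JLAB_MCP_SLURM_PARTITION": "gpu",
    "JLAB_MCP_SLURM_GRES": "gpu:1",
    "JLAB_MCP_SLURM_CPUS": "4",
    "JLAB_MCP_SLURM_MEM": "32000",
    "JLAB_MCP_SLURM_TIME": "4:00:00",
}

def setting(name: str) -> str:
    # An environment variable always wins; otherwise fall back
    # to the documented default.
    return os.environ.get(name, DEFAULTS[name])
```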


Claude Desktop Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "jlab-mcp": {
      "command": "jlab-mcp"
    }
  }
}

Try it

Start a JupyterLab session on the GPU partition for 2 hours.
Check the status of my current SLURM job and GPU memory usage.
Execute this Python script on the remote compute node and return the results.
Stop the current JupyterLab session and cancel the SLURM job.

Frequently Asked Questions

What are the key features of JLab MCP?

Executes Python code on remote GPU-accelerated SLURM compute nodes. Supports local mode for laptops and workstations without SLURM. Maintains persistent JupyterLab sessions across Claude Code restarts. Provides real-time monitoring of GPU memory and utilization via nvidia-smi. Automatically manages SLURM job submission and polling.

What can I use JLab MCP for?

Running heavy machine learning training tasks on HPC clusters directly from Claude. Developing and testing data science notebooks on remote GPU nodes. Automating HPC job submission and result retrieval for research workflows. Bridging local IDE environments with high-performance computing resources.

How do I install JLab MCP?

Install JLab MCP by running: uv tool install git+https://github.com/kdkyum/jlab-mcp.git

What MCP clients work with JLab MCP?

JLab MCP works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
