# jlab-mcp

Execute Python code on GPU compute nodes via JupyterLab on SLURM clusters.
A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.
Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.
## Architecture

```
Claude Code
    ↕ stdio
MCP Server
    ↕ HTTP/WebSocket
JupyterLab (SLURM compute node or local subprocess)   ← one server, many kernels
    ↕
IPython Kernels (GPU access)
```
JupyterLab runs either on a SLURM compute node (HPC clusters) or as a local subprocess (laptops/workstations). The server is managed separately from the MCP server — you start it with jlab-mcp start and it keeps running across Claude Code sessions. All sessions create separate kernels on this shared server. Each project directory gets its own JupyterLab instance — the status file is scoped by a hash of the working directory where jlab-mcp start was run.
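The per-project scoping can be sketched roughly as follows. This is an illustration, not jlab-mcp's actual code: the `status_dir` helper and the exact hashing scheme (SHA-256, 8 hex characters) are assumptions; only the idea of deriving a stable directory name from the working directory comes from the text above.

```python
import hashlib
from pathlib import Path

def status_dir(cwd: str, base: str = "~/.jlab-mcp/servers") -> Path:
    """Derive a per-project status directory from a working directory.

    Hypothetical sketch: each project path hashes to a stable, unique
    directory name, so different projects get different JupyterLab
    instances.
    """
    digest = hashlib.sha256(cwd.encode()).hexdigest()[:8]
    name = Path(cwd).name
    return Path(base).expanduser() / f"{name}-{digest}"

# Two different project directories map to two different status dirs.
print(status_dir("/shared/fs/my-project"))
```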
## Local Mode
On machines without SLURM (laptops, workstations), jlab-mcp automatically runs JupyterLab as a local subprocess. Mode is auto-detected: if sbatch is on PATH, SLURM mode is used; otherwise, local mode.
Override with the `JLAB_MCP_RUN_MODE` environment variable:

```bash
export JLAB_MCP_RUN_MODE=local   # force local mode
export JLAB_MCP_RUN_MODE=slurm   # force SLURM mode
```
In local mode, jlab-mcp start runs in the foreground — press Ctrl+C to stop. The status file uses the same format as SLURM mode, so the MCP server works identically in both modes.
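The detection logic described above amounts to a PATH lookup plus the environment override. A minimal sketch using only the standard library (`detect_run_mode` is a hypothetical name; the packaged implementation may differ):

```python
import os
import shutil

def detect_run_mode() -> str:
    """Pick 'slurm' if sbatch is on PATH, else 'local'.

    Honors the JLAB_MCP_RUN_MODE override described above.
    """
    override = os.environ.get("JLAB_MCP_RUN_MODE")
    if override in ("local", "slurm"):
        return override
    return "slurm" if shutil.which("sbatch") else "local"

print(detect_run_mode())
```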
## Setup

```bash
# Install (no git clone needed)
uv tool install git+https://github.com/kdkyum/jlab-mcp.git
```
The SLURM job activates .venv in the current working directory. Set up your project's venv on the shared filesystem with the compute dependencies:
```bash
cd /shared/fs/my-project
uv venv
uv pip install jupyterlab ipykernel matplotlib numpy
uv pip install torch --index-url https://download.pytorch.org/whl/cu126  # GPU support
```
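After setting up the venv, a quick sanity check that torch sees the GPU can save a debugging round trip. A small script you might run inside the venv (e.g. with `uv run python`); `gpu_check` is an illustrative helper, not part of jlab-mcp:

```python
def gpu_check() -> str:
    """Return a one-line summary of torch/CUDA availability.

    Degrades gracefully when torch is not installed, so it is safe to
    run in any environment.
    """
    try:
        import torch
        return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"
    except ImportError:
        return "torch is not installed in this environment"

print(gpu_check())
```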
## Usage

### 1. Start the compute node

In a separate terminal, start the SLURM job:

```bash
jlab-mcp start             # uses default time limit (4h)
jlab-mcp start 24:00:00    # 24 hour time limit
jlab-mcp start 1-00:00:00  # 1 day
```
This submits the job and waits until JupyterLab is ready:

```
SLURM job 24215408 submitted, waiting in queue...
Job running on ravg1011, JupyterLab starting...
JupyterLab ready at http://ravg1011:18432
```
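Readiness here amounts to polling the server's HTTP API until it answers. A minimal sketch, assuming only that any Jupyter server responds on its standard `/api` endpoint (`wait_until_ready` is a hypothetical helper, not jlab-mcp's actual health check):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, timeout: float = 600.0, interval: float = 5.0) -> bool:
    """Poll a Jupyter server's /api endpoint until it answers or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{url}/api", timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry after a short pause
        time.sleep(interval)
    return False

# Example (hypothetical host/port from a `jlab-mcp start` run):
# wait_until_ready("http://ravg1011:18432")
```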
### 2. Use Claude Code
In another terminal, start Claude Code. The MCP server connects to the running JupyterLab automatically.
### 3. Stop when done

```bash
jlab-mcp stop
```
## CLI Commands

| Command | Description |
|---|---|
| `jlab-mcp start [TIME] [--debug]` | Start JupyterLab and wait until ready. In SLURM mode, submits a job and polls until the server responds. In local mode, spawns a subprocess and blocks in the foreground. Optional `TIME` overrides `JLAB_MCP_SLURM_TIME` (e.g. `24:00:00`). Skips submission if an existing server is still running. |
| `jlab-mcp stop` | Stop JupyterLab. In SLURM mode, runs `scancel`. In local mode, sends SIGTERM to the subprocess. Removes the status file in both cases. |
| `jlab-mcp wait` | Poll the status file from another terminal until the server is ready (up to 10 min). Prints state transitions (pending → starting → ready). Useful for scripts or for monitoring start progress from a separate shell. |
| `jlab-mcp status` | Print server state, mode, hostname, port, and whether the process/job is alive. Lists active kernels with execution state and last activity time. Queries GPU memory and utilization via `nvidia-smi` on a temporary kernel. |
| `jlab-mcp` | Run the MCP server (stdio transport, used by Claude Code; not run manually). |
All commands accept `--debug` to enable verbose logging (status file reads, SLURM parameters, health check attempts, connection file paths) on stderr.

The SLURM job survives Claude Code restarts. You only need to run `jlab-mcp start` once per work session.
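The GPU query that `jlab-mcp status` reports can be approximated with a direct `nvidia-smi` call. This is an illustrative local sketch (`gpu_summary` is a hypothetical helper); the real tool runs the query on a temporary kernel on the compute node:

```python
import shutil
import subprocess

def gpu_summary() -> str:
    """Return GPU memory usage and utilization, one line per GPU."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (no GPU visible)"
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if out.returncode != 0:
        return f"nvidia-smi failed: {out.stderr.strip()}"
    return out.stdout.strip()

print(gpu_summary())
```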
## Configuration

All settings are configurable via environment variables. No values are hardcoded for a specific cluster.

| Environment Variable | Default | Description |
|---|---|---|
| `JLAB_MCP_DIR` | `~/.jlab-mcp` | Base working directory |
| `JLAB_MCP_NOTEBOOK_DIR` | `./notebooks` | Notebook storage (relative to cwd) |
| `JLAB_MCP_LOG_DIR` | `~/.jlab-mcp/logs` | SLURM job logs |
| `JLAB_MCP_STATUS_DIR` | `~/.jlab-mcp/servers/{name}-{hash}` | Per-project status directory (auto-derived from cwd) |
| `JLAB_MCP_CONNECTION_DIR` | `~/.jlab-mcp/connections` | Connection info files |
| `JLAB_MCP_SLURM_PARTITION` | `gpu` | SLURM partition |
| `JLAB_MCP_SLURM_GRES` | `gpu:1` | SLURM generic resource |
| `JLAB_MCP_SLURM_CPUS` | `4` | CPUs per task |
| `JLAB_MCP_SLURM_MEM` | `32000` | Memory in MB |
| `JLAB_MCP_SLURM_TIME` | `4:00:00` | Wall-clock time limit |
### Claude Code Configuration

Register the server in Claude Code's MCP settings:

```json
{"mcpServers": {"jlab-mcp": {"command": "jlab-mcp"}}}
```