CanvasXpress MCP Server

Local setup required. This server has to be cloned and prepared on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to clone and prepare the server before adding it to Claude Code.

python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the server.

claude mcp add canvasxpress-mcp -- python "<FULL_PATH_TO_CANVASXPRESS_MCP>/src/server.py"

Replace <FULL_PATH_TO_CANVASXPRESS_MCP> with the path to the folder you prepared in step 1.

README.md

Natural language → CanvasXpress JSON configs, served over HTTP on port 8100.

Describe a chart in plain English and get back a CanvasXpress JSON config object ready to pass directly to new CanvasXpress(). No CanvasXpress expertise required.

"Clustered heatmap with RdBu colors and dendrograms on both axes"
"Volcano plot with log2 fold change on x-axis and -log10 p-value on y-axis"
"Violin plot of gene expression by cell type, Tableau colors"
"Survival curve for two treatment groups"
"PCA scatter plot colored by Treatment with regression ellipses"
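Prompts like these come back as a single JSON object. A hypothetical shape for the heatmap request is sketched below — the option names follow CanvasXpress conventions but are illustrative, not guaranteed server output:

```python
import json

# Hypothetical config for "clustered heatmap with RdBu colors and
# dendrograms on both axes" -- key names are illustrative only.
config = {
    "graphType": "Heatmap",
    "colorSpectrum": ["blue", "white", "red"],  # stand-in for an RdBu ramp
    "samplesClustered": True,    # dendrogram on one axis
    "variablesClustered": True,  # ...and on the other
}

# Serialize to the JSON string that gets embedded in a page and
# passed to `new CanvasXpress(...)` on the JavaScript side.
print(json.dumps(config, indent=2))
```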

Supports four LLM backends: Anthropic API, Amazon Bedrock, Ollama (local), and OpenAI-compatible APIs including corporate gateways.


How it works

Knowledge base flow

  1. Your description is matched against few-shot examples using semantic vector search (sqlite-vec)
  2. The top 6 most relevant examples are included as context (RAG)
  3. A tiered system prompt is assembled from the canvasxpress-LLM knowledge base — only the content relevant to your request is included
  4. The configured LLM generates a validated CanvasXpress JSON config
  5. If headers/data are provided, all column references are validated against them
  6. The config is returned ready to pass to new CanvasXpress()
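The validation in step 5 amounts to checking every column name the LLM emitted against the headers the caller supplied. A minimal sketch — the function name and the set of config fields that hold column references are assumptions, not the server's actual code:

```python
def find_invalid_columns(config, headers):
    """Collect column references in a config that are absent from the
    caller-supplied headers. `candidate_keys` is an assumption about
    which config fields carry column names."""
    candidate_keys = ("xAxis", "yAxis", "colorBy", "groupingFactors")
    invalid = []
    for key in candidate_keys:
        value = config.get(key, [])
        refs = value if isinstance(value, list) else [value]
        invalid.extend(r for r in refs if r not in headers)
    return invalid

headers = ["log2FC", "pvalue", "Treatment"]
config = {"xAxis": ["log2FC"], "yAxis": ["pval"], "colorBy": "Treatment"}
print(find_invalid_columns(config, headers))  # -> ['pval']
```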

Project structure

canvasxpress-mcp/
│
├── src/
│   ├── server.py           — FastMCP HTTP server (main entry point)
│   └── llm_providers.py    — Unified LLM backend (Anthropic, Bedrock, Ollama, OpenAI)
│
├── data/
│   ├── few_shot_examples.json  — RAG examples (add more to improve accuracy)
│   └── embeddings.db           — sqlite-vec vector index (built by build_index.py)
│
├── build_index.py          — builds the vector index from few_shot_examples.json
│
├── test_client.py          — Python test client
├── test_client.pl          — Perl test client
├── test_client.mjs         — Node.js test client (Node 18+)
│
├── knowledge_base_flow.svg — architecture diagram
├── requirements.txt
└── README.md

Setup

1. Python environment

python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

2. Build the vector index (one-time)

Embeds the few-shot examples for semantic retrieval:

python build_index.py

Re-run whenever you add or change data/few_shot_examples.json. If you skip this step the server still works — it falls back to text-similarity matching and logs a warning.
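The fallback can be approximated with plain string similarity. A minimal stdlib sketch — a stand-in for the vector search that embeddings.db provides, not the server's actual matcher:

```python
from difflib import SequenceMatcher

def rank_examples(query, examples, top_k=6):
    """Rank few-shot examples by raw text similarity to the query,
    mimicking the top-6 retrieval without a vector index."""
    scored = [(SequenceMatcher(None, query.lower(), ex.lower()).ratio(), ex)
              for ex in examples]
    scored.sort(reverse=True)
    return [ex for _, ex in scored[:top_k]]

examples = [
    "Volcano plot with log2 fold change on the x-axis",
    "Clustered heatmap with dendrograms",
    "Violin plot of expression by cell type",
]
print(rank_examples("volcano plot of fold change", examples, top_k=1))
```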

3. Configure your LLM provider

Choose one of the four supported providers and set the required environment variables. See the LLM providers section for full details.

# Quickstart — Anthropic (default)
export ANTHROPIC_API_KEY="sk-ant-..."

4. Start the server

python src/server.py

Server starts at: http://localhost:8100/mcp

To run on a different port:

MCP_PORT=9000 python src/server.py

Then point test clients at the new port:

MCP_URL=http://localhost:9000/mcp python test_client.py
MCP_URL=http://localhost:9000/mcp perl test_client.pl
MCP_URL=http://localhost:9000/mcp node test_client.mjs
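Under the hood, each client POSTs a JSON-RPC 2.0 request to the /mcp endpoint. A sketch of the payload shape — the tool name "generate_config" and its argument keys are assumptions; check the server's tool listing for the real names:

```python
import json
import os

# Endpoint resolution mirrors the MCP_URL convention used above.
mcp_url = os.environ.get("MCP_URL", "http://localhost:8100/mcp")

# MCP tool invocations travel as JSON-RPC 2.0 "tools/call" requests.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        # Tool name and argument keys are illustrative assumptions.
        "name": "generate_config",
        "arguments": {"description": "Volcano plot with -log10 p-value"},
    },
}
print(mcp_url)
print(json.dumps(payload))
```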

5. Debug mode

See the full reasoning trace per request — provider, model, tier selection, retrieved examples, prompt size, token usage, raw LLM response, and column validation:

CX_DEBUG=1 python src/server.py

Each request prints 6 labelled steps to stderr:

── STEP 1 — RETRIEVAL ──   query matched, 6 examples in 8ms
── STEP 2 — PROMPT ──      system 4821 chars, user 2103 chars
── TIERED PROMPT ──        Tier 2 (base+schema+data)  GraphType: Heatmap
── STEP 3 — LLM CALL ──    Provider: bedrock  Model: anthropic.claude-sonnet-...
                            Latency: 1243ms  Input: 3847 tokens  Output: 89 tokens
── STEP 4 — RAW RESPONSE   {"graphType": "Heatmap", ...}
── STEP 5 — PARSED CONFIG  graphType: Heatmap, keys: [...]
── STEP 6 — VALIDATION ──  ✅ All column references valid

LLM providers

The provider is selected via the LLM_PROVIDER environment variable. All provider switching is handled in src/llm_providers.py; server.py is unchanged regardless of which backend is active.
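The dispatch can be pictured as a small factory keyed on LLM_PROVIDER. An illustrative sketch, not the file's actual contents:

```python
import os

def make_provider(env=None):
    """Pick the LLM backend named by LLM_PROVIDER (default: anthropic).
    Illustrative only -- the real module wraps SDK clients behind a
    shared interface rather than returning a bare label."""
    env = os.environ if env is None else env
    supported = {"anthropic", "bedrock", "ollama", "openai"}
    name = env.get("LLM_PROVIDER", "anthropic").lower()
    if name not in supported:
        raise ValueError(f"unknown LLM_PROVIDER: {name!r}")
    return name

print(make_provider({}))                           # -> anthropic
print(make_provider({"LLM_PROVIDER": "bedrock"}))  # -> bedrock
```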

Anthropic (default)

Direct access to the Anthropic API.

export LLM_PROVIDER=anthropic          # optional — this is the default
export ANTHROPIC_API_KEY="sk-ant-..."
export LLM_MODEL=claude-sonnet-4-20250514  # optional — this is the default
python src/server.py

No extra dependencies required beyond requirements.txt.

Amazon Bedrock

Access Anthropic models through your AWS account via the Bedrock Converse API. Uses your existing AWS credentials — IAM roles, SSO profiles, and temporary credentials are all supported via the standard boto3 credential chain.

pip install boto3

export LLM_PROVIDER=bedrock
export AWS_REGION=us-east-1

# Option A — explicit credentials
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

Environment Variables

LLM_PROVIDER       — The LLM backend to use (anthropic, bedrock, ollama, or openai)
ANTHROPIC_API_KEY  — API key for Anthropic models
AWS_REGION         — AWS region for the Bedrock provider
CX_DEBUG           — Enable debug mode to see reasoning traces
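Read together, a server-side snapshot of these variables might look like the following — the defaults for the backend and port come from the sections above; how the server actually loads them is an assumption:

```python
import os

# Snapshot the documented environment variables; defaults mirror the
# ones stated earlier in this README (anthropic backend, port 8100).
settings = {
    "provider": os.environ.get("LLM_PROVIDER", "anthropic"),
    "anthropic_key": os.environ.get("ANTHROPIC_API_KEY"),
    "aws_region": os.environ.get("AWS_REGION"),
    "port": int(os.environ.get("MCP_PORT", "8100")),
    "debug": os.environ.get("CX_DEBUG") == "1",
}
print(settings)
```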

Configuration

claude_desktop_config.json
{"mcpServers": {"canvasxpress": {"command": "python", "args": ["/path/to/canvasxpress-mcp/src/server.py"]}}}

Try it

Create a clustered heatmap with RdBu colors and dendrograms on both axes.
Generate a volcano plot with log2 fold change on x-axis and -log10 p-value on y-axis.
Make a violin plot of gene expression by cell type using Tableau colors.
Create a survival curve for two treatment groups.
Generate a PCA scatter plot colored by Treatment with regression ellipses.

Frequently Asked Questions

What are the key features of CanvasXpress MCP Server?

Converts natural language descriptions into validated CanvasXpress JSON configurations. Uses RAG with semantic vector search to retrieve relevant few-shot examples. Supports four LLM backends: Anthropic, Amazon Bedrock, Ollama, and OpenAI-compatible APIs. Validates column references against provided data headers and types. Provides a tiered system prompt based on the CanvasXpress knowledge base.

What can I use CanvasXpress MCP Server for?

Rapidly generating complex chart configurations without manual CanvasXpress API knowledge. Standardizing visualization styles across a team using a shared knowledge base. Automating the creation of scientific plots from natural language data descriptions. Integrating advanced chart generation into LLM-powered data analysis workflows.

How do I install CanvasXpress MCP Server?

Install CanvasXpress MCP Server by running: python3.11 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt

What MCP clients work with CanvasXpress MCP Server?

CanvasXpress MCP Server works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
