CanvasXpress MCP Server
Natural language → CanvasXpress JSON configs, served over HTTP on port 8100.
Describe a chart in plain English and get back a validated CanvasXpress JSON config
object, ready to pass directly to new CanvasXpress(). No CanvasXpress expertise required.
"Clustered heatmap with RdBu colors and dendrograms on both axes"
"Volcano plot with log2 fold change on x-axis and -log10 p-value on y-axis"
"Violin plot of gene expression by cell type, Tableau colors"
"Survival curve for two treatment groups"
"PCA scatter plot colored by Treatment with regression ellipses"
Supports four LLM backends: Anthropic API, Amazon Bedrock, Ollama (local), and OpenAI-compatible APIs including corporate gateways.
How it works
- Your description is matched against few-shot examples using semantic vector search (sqlite-vec)
- The top 6 most relevant examples are included as context (RAG)
- A tiered system prompt is assembled from the canvasxpress-LLM knowledge base — only the content relevant to your request is included
- The configured LLM generates a validated CanvasXpress JSON config
- If headers/data are provided, all column references are validated against them
- The config is returned, ready to pass to new CanvasXpress()
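The retrieval step above can be sketched as a cosine-similarity top-k search over example embeddings. This is illustrative only: the real server uses sqlite-vec, and the toy 3-dimensional vectors below stand in for real model embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_examples(query_vec, examples, k=6):
    """Return the k few-shot examples whose embeddings are closest to the query."""
    scored = sorted(examples, key=lambda ex: cosine(query_vec, ex["embedding"]), reverse=True)
    return scored[:k]

# Toy embeddings stand in for real model embeddings.
examples = [
    {"prompt": "heatmap with dendrograms", "embedding": [0.9, 0.1, 0.0]},
    {"prompt": "volcano plot",             "embedding": [0.1, 0.9, 0.0]},
    {"prompt": "clustered heatmap",        "embedding": [0.8, 0.2, 0.1]},
]
hits = top_k_examples([1.0, 0.0, 0.0], examples, k=2)
print([h["prompt"] for h in hits])  # the two heatmap examples rank first
```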
Project structure
canvasxpress-mcp/
│
├── src/
│ ├── server.py — FastMCP HTTP server (main entry point)
│ └── llm_providers.py — Unified LLM backend (Anthropic, Bedrock, Ollama, OpenAI)
│
├── data/
│ ├── few_shot_examples.json — RAG examples (add more to improve accuracy)
│ └── embeddings.db — sqlite-vec vector index (built by build_index.py)
│
├── build_index.py — builds the vector index from few_shot_examples.json
│
├── test_client.py — Python test client
├── test_client.pl — Perl test client
├── test_client.mjs — Node.js test client (Node 18+)
│
├── knowledge_base_flow.svg — architecture diagram
├── requirements.txt
└── README.md
Setup
1. Python environment
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
2. Build the vector index (one-time)
Embeds the few-shot examples for semantic retrieval:
python build_index.py
Re-run whenever you add or change data/few_shot_examples.json. If you skip this
step, the server still works — it falls back to text-similarity matching and logs a warning.
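The text-similarity fallback could look something like the following stdlib sketch. This is not the server's actual implementation, just an illustration of ranking examples by plain string similarity when no vector index exists.

```python
import difflib

def fallback_match(query, examples, k=6):
    """Rank few-shot example prompts by string similarity to the query."""
    scored = sorted(
        examples,
        key=lambda ex: difflib.SequenceMatcher(None, query.lower(), ex.lower()).ratio(),
        reverse=True,
    )
    return scored[:k]

examples = ["Clustered heatmap with RdBu colors", "Volcano plot", "Violin plot by cell type"]
print(fallback_match("clustered heatmap", examples, k=1))  # the heatmap example ranks first
```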
3. Configure your LLM provider
Choose one of the four supported providers and set the required environment variables. See the LLM providers section for full details.
# Quickstart — Anthropic (default)
export ANTHROPIC_API_KEY="sk-ant-..."
4. Start the server
python src/server.py
Server starts at: http://localhost:8100/mcp
To run on a different port:
MCP_PORT=9000 python src/server.py
Then point test clients at the new port:
MCP_URL=http://localhost:9000/mcp python test_client.py
MCP_URL=http://localhost:9000/mcp perl test_client.pl
MCP_URL=http://localhost:9000/mcp node test_client.mjs
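For reference, the envelope a client POSTs to the endpoint is plain JSON-RPC 2.0. The sketch below only builds and sends the tools/call request; the tool name describe_chart is hypothetical (check the server's tool list), and a real client such as test_client.py also performs the MCP initialize handshake first.

```python
import json
import urllib.request

def build_tool_call(description, request_id=1):
    """Build a JSON-RPC 2.0 tools/call envelope for the MCP endpoint.
    The tool name 'describe_chart' is hypothetical."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "describe_chart",
            "arguments": {"description": description},
        },
    }

def post(url, payload):
    """POST a JSON-RPC payload and return the decoded response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_tool_call("Volcano plot with log2 fold change on x-axis")
# post("http://localhost:9000/mcp", payload)  # requires a running server
```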
5. Debug mode
See the full reasoning trace per request — provider, model, tier selection, retrieved examples, prompt size, token usage, raw LLM response, and column validation:
CX_DEBUG=1 python src/server.py
Each request prints 6 labelled steps to stderr:
── STEP 1 — RETRIEVAL ── query matched, 6 examples in 8ms
── STEP 2 — PROMPT ── system 4821 chars, user 2103 chars
── TIERED PROMPT ── Tier 2 (base+schema+data) GraphType: Heatmap
── STEP 3 — LLM CALL ── Provider: bedrock Model: anthropic.claude-sonnet-...
Latency: 1243ms Input: 3847 tokens Output: 89 tokens
── STEP 4 — RAW RESPONSE {"graphType": "Heatmap", ...}
── STEP 5 — PARSED CONFIG graphType: Heatmap, keys: [...]
── STEP 6 — VALIDATION ── ✅ All column references valid
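Step 6's column validation can be approximated as a set-membership check of every column the config references against the provided headers. This is an illustrative sketch: the hypothetical extractor below only walks a few common keys, not an exhaustive list of CanvasXpress properties.

```python
def validate_columns(config, headers):
    """Return config column references that are absent from the data headers.
    The keys checked here are illustrative, not an exhaustive CanvasXpress list."""
    referenced = []
    for key in ("colorBy", "groupingFactors", "segregateVariablesBy"):
        value = config.get(key)
        if isinstance(value, str):
            referenced.append(value)
        elif isinstance(value, list):
            referenced.extend(value)
    return [col for col in referenced if col not in set(headers)]

config = {"graphType": "Scatter2D", "colorBy": "Treatment", "groupingFactors": ["CellType"]}
missing = validate_columns(config, ["Treatment", "Dose"])
print(missing)  # ['CellType']
```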
LLM providers
The provider is selected via the LLM_PROVIDER environment variable. All provider
switching is handled in src/llm_providers.py — server.py is unchanged regardless
of which backend is active.
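The dispatch on LLM_PROVIDER might be shaped like the factory below. The provider names match the four backends above, but the returned values are placeholders for the real provider classes in src/llm_providers.py.

```python
import os

def make_provider(name=None):
    """Select an LLM backend from LLM_PROVIDER, defaulting to 'anthropic'.
    Return values are placeholders for the real provider classes."""
    name = (name or os.environ.get("LLM_PROVIDER", "anthropic")).lower()
    providers = {
        "anthropic": lambda: "AnthropicProvider",
        "bedrock": lambda: "BedrockProvider",
        "ollama": lambda: "OllamaProvider",
        "openai": lambda: "OpenAIProvider",
    }
    if name not in providers:
        raise ValueError(f"Unknown LLM_PROVIDER: {name!r}")
    return providers[name]()

print(make_provider("bedrock"))  # BedrockProvider
```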
Anthropic (default)
Direct access to the Anthropic API.
export LLM_PROVIDER=anthropic # optional — this is the default
export ANTHROPIC_API_KEY="sk-ant-..."
export LLM_MODEL=claude-sonnet-4-20250514 # optional — this is the default
python src/server.py
No extra dependencies required beyond requirements.txt.
Amazon Bedrock
Access Anthropic models through your AWS account via the Bedrock Converse API. Uses your existing AWS credentials — IAM roles, SSO profiles, and temporary credentials are all supported via the standard boto3 credential chain.
pip install boto3
export LLM_PROVIDER=bedrock
export AWS_REGION=us-east-1
# Option A — explicit credentials
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
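A hedged sketch of how a Bedrock call via the Converse API is shaped. The builder assembles the request keyword arguments (field names follow the Converse API); the model id is a placeholder, and the actual call in src/llm_providers.py may differ in detail.

```python
def build_converse_request(model_id, system_prompt, user_prompt, max_tokens=2048):
    """Assemble keyword arguments for bedrock-runtime's converse() call.
    Field names follow the Bedrock Converse API."""
    return {
        "modelId": model_id,
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.0},
    }

def call_bedrock(kwargs, region="us-east-1"):
    """Send the request; requires boto3 and valid AWS credentials."""
    import boto3  # imported lazily so the builder above works without boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(**kwargs)
    return resp["output"]["message"]["content"][0]["text"]

req = build_converse_request(
    "anthropic.claude-example-model-id",  # placeholder; use your region's model id
    "You generate CanvasXpress JSON configs.",
    "Volcano plot with log2 fold change on x-axis",
)
# call_bedrock(req)  # requires AWS credentials
```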
Environment Variables
LLM_PROVIDER — The LLM backend to use (anthropic, bedrock, ollama, or openai)
ANTHROPIC_API_KEY — API key for Anthropic models
AWS_REGION — AWS region for the Bedrock provider
CX_DEBUG — Enable debug mode to see reasoning traces
Configuration
{
  "mcpServers": {
    "canvasxpress": {
      "command": "python",
      "args": ["/path/to/canvasxpress-mcp/src/server.py"]
    }
  }
}