# CI-1T MCP Server

Prediction stability engine for AI agents.

Version: 1.7.0
Last Updated: February 27, 2026
License: Proprietary
MCP (Model Context Protocol) server for the CI-1T prediction stability engine. Lets AI agents — Claude Desktop, Cursor, Windsurf, VS Code Copilot, and any MCP-compatible client — evaluate model stability, manage fleet sessions, and control API keys directly.
One credential. One env var. That's it.
## Tools (20) + Resources (1)

| Tool | Description | Auth |
|---|---|---|
| `evaluate` | Evaluate prediction stability (floats or Q0.16) | API key |
| `fleet_evaluate` | Fleet-wide multi-node evaluation (floats or Q0.16) | API key |
| `probe` | Probe any LLM for instability (3x same prompt). BYOM mode: bring your own model via OpenAI-compatible API | API key or BYOM |
| `health` | Check CI-1T engine status | API key |
| `fleet_session_create` | Create a persistent fleet session | API key |
| `fleet_session_round` | Submit a scoring round | API key |
| `fleet_session_state` | Get session state (read-only) | API key |
| `fleet_session_list` | List active fleet sessions | API key |
| `fleet_session_delete` | Delete a fleet session | API key |
| `list_api_keys` | List user's API keys | API key |
| `create_api_key` | Generate and register a new API key | API key |
| `delete_api_key` | Delete an API key by ID | API key |
| `get_invoices` | Get billing history (Stripe) | API key |
| `onboarding` | Welcome guide + setup instructions | None |
| `interpret_scores` | Statistical breakdown of scores | None |
| `convert_scores` | Convert between floats and Q0.16 | None |
| `generate_config` | Integration boilerplate for any framework | None |
| `compare_windows` | Compare baseline vs recent episodes for drift detection | None |
| `alert_check` | Check episodes against custom thresholds, return alerts | None |
| `visualize` | Interactive HTML visualization of evaluate results | None |
| Resource | URI | Description |
|---|---|---|
| `tools_guide` | `ci1t://tools-guide` | Full usage guide: response schemas, chaining patterns, fleet workflow, thresholds, example pipelines |
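As a rough sketch of what the `convert_scores` tool does (the exact rounding and clamping behavior below is an assumption, not the server's documented algorithm), Q0.16 fixed point maps a float in [0, 1) to a 16-bit integer:

```typescript
// Q0.16: 16 fractional bits, so a float in [0, 1) maps to an integer
// in [0, 65535]. Rounding and clamping choices here are illustrative.
function floatToQ016(x: number): number {
  const q = Math.round(x * 65536);
  return Math.min(Math.max(q, 0), 65535); // clamp to the representable range
}

function q016ToFloat(q: number): number {
  return q / 65536;
}

floatToQ016(0.5);   // 32768
q016ToFloat(45000); // ~0.6866
```

This is also why the raw scores in the usage examples further down sit in the 0–65535 range.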
## Onboarding

New users get guided setup automatically. If no API key is configured:

- The startup log prints a hint: "Create a free account at collapseindex.org — 1,000 free credits on signup"
- The `onboarding` tool returns a full welcome guide with account status, setup steps, config examples, available tools, and pricing
- Auth-guarded tools return a friendly error with specific setup instructions instead of a raw 401
- Utility tools (`interpret_scores`, `convert_scores`, `generate_config`) always work — no auth, no credits

Every new account gets 1,000 free credits (no credit card required), enough for 1,000 evaluation episodes.
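The auth-guard behavior can be pictured as a thin wrapper. This is a hypothetical sketch, not the server's actual code:

```typescript
// Hypothetical sketch of the auth guard: when no key is configured,
// auth-guarded tools answer with setup instructions instead of a raw 401.
function guardedTool<T>(apiKey: string | undefined, run: () => T): T | string {
  if (!apiKey) {
    return (
      "No CI1T_API_KEY configured. Create a free account at " +
      "collapseindex.org (1,000 free credits on signup) and add the key " +
      "to your MCP client config."
    );
  }
  return run();
}

guardedTool(undefined, () => "evaluated");    // returns the setup hint
guardedTool("ci_example", () => "evaluated"); // runs the tool
```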
## Setup

### Environment Variables

| Variable | Required | Description |
|---|---|---|
| `CI1T_API_KEY` | Yes | Your `ci_...` API key — single credential for all tools |
| `CI1T_BASE_URL` | No | API base URL (default: `https://collapseindex.org`) |
### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```
### Cursor / Windsurf

Add to `.cursor/mcp.json` or equivalent:

```json
{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```
### VS Code (GitHub Copilot)

Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "ci1t": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}
```
### Run from source (no Docker)

```shell
git clone https://github.com/collapseindex/ci1t-mcp.git
cd ci1t-mcp
npm install
npm run build

# Set env var and run
CI1T_API_KEY=ci_xxx node dist/index.js
```
### Build Docker Image

```shell
docker build -t collapseindex/ci1t-mcp .
```
## Example Usage

Once connected, an AI agent can:

"Evaluate these prediction scores: 45000, 32000, 51000, 48000, 29000, 55000"

The agent calls `evaluate` with `scores: [45000, 32000, 51000, 48000, 29000, 55000]` and gets back per-episode stability metrics, including credits used and remaining.

"Create a fleet session with 4 nodes named GPT-4, Claude, Gemini, Llama"

"List my API keys"

"Probe this prompt for stability: What is the capital of France?"

"Probe my local Ollama llama3 model with: What is the meaning of life?"

The agent calls `probe` in BYOM mode — it sends the prompt 3x to `http://localhost:11434/v1` and compares the responses for instability.
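The idea behind `probe` can be sketched locally. The server's actual instability metric is not documented here; this illustrative version scores the three responses by mean pairwise word overlap (Jaccard), where identical answers score 1.0:

```typescript
// Illustrative only: the server's real instability metric is an assumption.
// Scores responses by mean pairwise Jaccard overlap of their word sets.
function jaccard(a: string, b: string): number {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  const inter = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 1 : inter / union;
}

function stability(responses: string[]): number {
  let sum = 0;
  let pairs = 0;
  for (let i = 0; i < responses.length; i++) {
    for (let j = i + 1; j < responses.length; j++) {
      sum += jaccard(responses[i], responses[j]);
      pairs++;
    }
  }
  return pairs === 0 ? 1 : sum / pairs;
}

stability(["Paris.", "Paris.", "Paris."]); // 1 (perfectly stable)
stability(["Paris.", "Lyon.", "Nice."]);   // 0 (every answer differs)
```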