CI-1T Prediction Stability Engine MCP Server

Add it to Claude Code

Run this in a terminal.

claude mcp add -e "CI1T_API_KEY=${CI1T_API_KEY}" ci1t-prediction-engine -- docker run -i --rm collapseindex/ci1t-mcp
Required: CI1T_API_KEY, plus one optional variable (CI1T_BASE_URL)

Prediction stability engine for AI agents.

CI-1T MCP Server

Version: 1.7.0
Last Updated: February 27, 2026
License: Proprietary

MCP (Model Context Protocol) server for the CI-1T prediction stability engine. Lets AI agents — Claude Desktop, Cursor, Windsurf, VS Code Copilot, and any MCP-compatible client — evaluate model stability, manage fleet sessions, and control API keys directly.

One credential. One env var. That's it.

Tools (20) + Resources (1)

| Tool | Description | Auth |
| --- | --- | --- |
| evaluate | Evaluate prediction stability (floats or Q0.16) | API key |
| fleet_evaluate | Fleet-wide multi-node evaluation (floats or Q0.16) | API key |
| probe | Probe any LLM for instability (3x same prompt). BYOM mode: bring your own model via an OpenAI-compatible API | API key or BYOM |
| health | Check CI-1T engine status | API key |
| fleet_session_create | Create a persistent fleet session | API key |
| fleet_session_round | Submit a scoring round | API key |
| fleet_session_state | Get session state (read-only) | API key |
| fleet_session_list | List active fleet sessions | API key |
| fleet_session_delete | Delete a fleet session | API key |
| list_api_keys | List user's API keys | API key |
| create_api_key | Generate and register a new API key | API key |
| delete_api_key | Delete an API key by ID | API key |
| get_invoices | Get billing history (Stripe) | API key |
| onboarding | Welcome guide + setup instructions | None |
| interpret_scores | Statistical breakdown of scores | None |
| convert_scores | Convert between floats and Q0.16 | None |
| generate_config | Integration boilerplate for any framework | None |
| compare_windows | Compare baseline vs recent episodes for drift detection | None |
| alert_check | Check episodes against custom thresholds, return alerts | None |
| visualize | Interactive HTML visualization of evaluate results | None |

| Resource | URI | Description |
| --- | --- | --- |
| tools_guide | ci1t://tools-guide | Full usage guide: response schemas, chaining patterns, fleet workflow, thresholds, example pipelines |

Onboarding

New users get guided setup automatically. If no API key is configured:

  • Startup log prints a hint: "Create a free account at collapseindex.org — 1,000 free credits on signup"
  • onboarding tool returns a full welcome guide with account status, setup steps, config examples, available tools, and pricing
  • Auth-guarded tools return a friendly error with specific setup instructions instead of a raw 401
  • Utility tools (interpret_scores, convert_scores, generate_config) always work — no auth, no credits

Every new account gets 1,000 free credits (no credit card required), enough for 1,000 evaluation episodes.

Setup

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| CI1T_API_KEY | Yes | Your ci_... API key, the single credential for all tools |
| CI1T_BASE_URL | No | API base URL (default: https://collapseindex.org) |

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}

Cursor / Windsurf

Add to .cursor/mcp.json or equivalent:

{
  "mcpServers": {
    "ci1t": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}

VS Code (GitHub Copilot)

Add to .vscode/mcp.json:

{
  "servers": {
    "ci1t": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "-i", "--rm", "collapseindex/ci1t-mcp"],
      "env": {
        "CI1T_API_KEY": "ci_your_key_here"
      }
    }
  }
}

Run from source (no Docker)

git clone https://github.com/collapseindex/ci1t-mcp.git
cd ci1t-mcp
npm install
npm run build

# Set env var and run
CI1T_API_KEY=ci_xxx node dist/index.js

Build Docker Image

docker build -t collapseindex/ci1t-mcp .

Example Usage

Once connected, an AI agent can:

"Evaluate these prediction scores: 45000, 32000, 51000, 48000, 29000, 55000"

The agent calls evaluate with scores: [45000, 32000, 51000, 48000, 29000, 55000] and gets back stability metrics per episode, including credits used and remaining.
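Scores like these fit the Q0.16 range (0 to 65535). As a rough illustration of what the convert_scores utility likely does (the server's exact rounding rules are not documented here), standard Q0.16 fixed-point maps a 16-bit integer to a float in [0, 1) by dividing by 2^16:

```python
# Illustrative Q0.16 <-> float conversion. This is a sketch of the standard
# fixed-point math, not necessarily the server's exact implementation.
# Q0.16: 0 integer bits, 16 fractional bits; value = raw / 2**16, range [0, 1).

def q016_to_float(raw: int) -> float:
    """Convert a Q0.16 integer (0..65535) to a float in [0, 1)."""
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("Q0.16 values must fit in 16 bits")
    return raw / 65536.0

def float_to_q016(x: float) -> int:
    """Convert a float in [0, 1) to the nearest Q0.16 integer."""
    if not 0.0 <= x < 1.0:
        raise ValueError("Q0.16 can only represent [0, 1)")
    return min(round(x * 65536.0), 0xFFFF)

scores = [45000, 32000, 51000, 48000, 29000, 55000]
print([round(q016_to_float(s), 4) for s in scores])
# → [0.6866, 0.4883, 0.7782, 0.7324, 0.4425, 0.8392]
```

In practice the agent can simply call convert_scores (no auth, no credits) rather than converting by hand.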

"Create a fleet session with 4 nodes named GPT-4, Claude, Gemini, Llama"

"List my API keys"

"Probe this prompt for stability: What is the capital of France?"

"Probe my local Ollama llama3 model with: What is the meaning of life?"

The agent calls probe in BYOM mode: it sends the prompt 3x to http://localhost:11434/v1 and evaluates the three responses for instability.


Try it

Evaluate these prediction scores: 45000, 32000, 51000, 48000, 29000, 55000
Create a fleet session with 4 nodes named GPT-4, Claude, Gemini, Llama
Probe this prompt for stability: What is the capital of France?
Compare my baseline scores with the recent episode data to check for drift
List my active API keys and check my billing history
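The drift prompt above maps to compare_windows. As a rough, hypothetical sketch of window-based drift detection (the CI-1T engine's actual statistic is not documented here), one common approach compares the mean and spread of a baseline score window against a recent window and raises an alert when the shift exceeds a threshold:

```python
# Hypothetical window-drift check illustrating the general idea behind
# compare_windows / alert_check; the engine's real metrics may differ.
from statistics import mean, pstdev

def drift_report(baseline: list[float], recent: list[float],
                 threshold: float = 0.1) -> dict:
    """Flag drift when the recent mean shifts by more than `threshold`."""
    shift = mean(recent) - mean(baseline)
    return {
        "baseline_mean": mean(baseline),
        "recent_mean": mean(recent),
        "mean_shift": shift,
        "baseline_stdev": pstdev(baseline),
        "recent_stdev": pstdev(recent),
        "drift_alert": abs(shift) > threshold,
    }

baseline = [0.69, 0.49, 0.78, 0.73]   # earlier episodes (floats in [0, 1))
recent = [0.44, 0.52, 0.41, 0.47]     # recent episodes, mean well below baseline
print(drift_report(baseline, recent)["drift_alert"])
# → True
```

alert_check generalizes this pattern to custom per-episode thresholds; in an agent workflow both run without auth or credits.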

Frequently Asked Questions

What are the key features of CI-1T Prediction Stability Engine?

Evaluate model prediction stability using float or Q0.16 formats. Perform fleet-wide multi-node evaluation and session management. Probe LLMs for instability by running prompts multiple times. Detect model drift by comparing baseline and recent episode windows. Generate interactive HTML visualizations of evaluation results.

What can I use CI-1T Prediction Stability Engine for?

Monitoring AI agent fleet performance across different model providers. Validating model output consistency for high-stakes prediction tasks. Automating drift detection in production AI pipelines. Managing API credentials and billing for CI-1T services directly from the IDE.

How do I install CI-1T Prediction Stability Engine?

Add the server to your MCP client with your API key set. For example, in Claude Code: claude mcp add -e "CI1T_API_KEY=${CI1T_API_KEY}" ci1t-prediction-engine -- docker run -i --rm collapseindex/ci1t-mcp

What MCP clients work with CI-1T Prediction Stability Engine?

CI-1T Prediction Stability Engine works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
