CSL-Core MCP Server

Local setup required. This server must be installed and prepared on your machine before you register it in Claude Code.
1. Set the server up locally

Run this once to install and prepare the server before adding it to Claude Code.

Run in terminal
pip install csl-core
2. Register it in Claude Code

After the local setup is done, run this command to point Claude Code at the built server.

Run in terminal
claude mcp add csl-core -- node "<FULL_PATH_TO_CSL_CORE>/dist/index.js"

Replace <FULL_PATH_TO_CSL_CORE>/dist/index.js with the actual path to the built index.js from step 1.

README.md

Deterministic AI safety policy engine with Z3 formal verification

CSL-Core


CSL-Core (Chimera Specification Language) is a deterministic safety layer for AI agents. Write rules in .csl files, verify them mathematically with Z3, enforce them at runtime — outside the model. The LLM never sees the rules. It simply cannot violate them.

pip install csl-core

Originally built for **Project Chimera**, now open-source for any AI system.


Why?

prompt = """You are a helpful assistant. IMPORTANT RULES:
- Never transfer more than $1000 for junior users
- Never send PII to external emails
- Never query the secrets table"""

This doesn't work. LLMs can be prompt-injected, rules are probabilistic (99% ≠ 100%), and there's no audit trail when something goes wrong.

CSL-Core flips this: rules live outside the model in compiled, Z3-verified policy files. Enforcement is deterministic — not a suggestion.
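The difference is easy to see in miniature. Below is a hypothetical, dependency-free sketch of the idea (not the CSL-Core API): the rule is ordinary code that runs outside the model, so it either holds or it does not, regardless of what the prompt says.

```python
# Hypothetical stand-in for a compiled policy: the strict_delete rule
# from the Quick Start, expressed as a deterministic function.
def strict_delete_allows(action: str, user_level: int) -> bool:
    # WHEN action == "DELETE" THEN user_level >= 4
    if action == "DELETE":
        return user_level >= 4
    return True  # the constraint only restricts DELETE

assert strict_delete_allows("READ", 1)        # low-level read: allowed
assert not strict_delete_allows("DELETE", 2)  # junior delete: blocked
assert strict_delete_allows("DELETE", 5)      # senior delete: allowed
```

No prompt injection can change the outcome, because the model is never asked for its opinion.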


Quick Start (60 Seconds)

1. Write a Policy

Create my_policy.csl:

CONFIG {
  ENFORCEMENT_MODE: BLOCK
  CHECK_LOGICAL_CONSISTENCY: TRUE
}

DOMAIN MyGuard {
  VARIABLES {
    action: {"READ", "WRITE", "DELETE"}
    user_level: 0..5
  }

  STATE_CONSTRAINT strict_delete {
    WHEN action == "DELETE"
    THEN user_level >= 4
  }
}

2. Verify & Test (CLI)

# Compile + Z3 formal verification
cslcore verify my_policy.csl

# Test a scenario
cslcore simulate my_policy.csl --input '{"action": "DELETE", "user_level": 2}'
# → BLOCKED: Constraint 'strict_delete' violated.

# Interactive REPL
cslcore repl my_policy.csl

3. Use in Python

from chimera_core import load_guard

guard = load_guard("my_policy.csl")

result = guard.verify({"action": "READ", "user_level": 1})
print(result.allowed)  # True

result = guard.verify({"action": "DELETE", "user_level": 2})
print(result.allowed)  # False
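Because verify returns a plain result object, you can gate any side effect on it. A minimal sketch of that pattern, using a stand-in StubGuard so it runs anywhere (in real code you would pass the object from load_guard; the raise-on-deny wrapper is illustrative, not part of the library):

```python
class StubResult:
    def __init__(self, allowed):
        self.allowed = allowed

class StubGuard:
    """Stand-in for load_guard("my_policy.csl"): mimics strict_delete."""
    def verify(self, payload):
        ok = payload.get("action") != "DELETE" or payload.get("user_level", 0) >= 4
        return StubResult(ok)

def guarded_delete(guard, record_id, user_level):
    # Consult the compiled policy before touching anything.
    result = guard.verify({"action": "DELETE", "user_level": user_level})
    if not result.allowed:
        raise PermissionError(f"policy blocked DELETE of {record_id}")
    return f"deleted {record_id}"

guard = StubGuard()
print(guarded_delete(guard, "rec-42", user_level=5))  # deleted rec-42
```

The deletion code never runs on a denial; there is no "the model decided to ignore the rule" failure mode.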

Benchmark: Adversarial Attack Resistance

We tested prompt-based safety rules vs CSL-Core enforcement across 4 frontier LLMs with 22 adversarial attacks and 15 legitimate operations:

| Approach | Attacks Blocked | Bypass Rate | Legit Ops Passed | Latency |
|---|---|---|---|---|
| GPT-4.1 (prompt rules) | 10/22 (45%) | 55% | 15/15 (100%) | ~850ms |
| GPT-4o (prompt rules) | 15/22 (68%) | 32% | 15/15 (100%) | ~620ms |
| Claude Sonnet 4 (prompt rules) | 19/22 (86%) | 14% | 15/15 (100%) | ~480ms |
| Gemini 2.0 Flash (prompt rules) | 11/22 (50%) | 50% | 15/15 (100%) | ~410ms |
| CSL-Core (deterministic) | 22/22 (100%) | 0% | 15/15 (100%) | ~0.84ms |

Why 100%? Enforcement happens outside the model. Prompt injection is irrelevant because there's nothing to inject against. Attack categories: direct instruction override, role-play jailbreaks, encoding tricks, multi-turn escalation, tool-name spoofing, and more.

Full methodology: `benchmarks/`


LangChain Integration

Protect any LangChain agent with 3 lines — no prompt changes, no fine-tuning:

from chimera_core import load_guard
from chimera_core.plugins.langchain import guard_tools
from langchain_classic.agents import AgentExecutor, create_tool_calling_agent

guard = load_guard("agent_policy.csl")

# Wrap tools — enforcement is automatic
safe_tools = guard_tools(
    tools=[search_tool, transfer_tool, delete_tool],
    guard=guard,
    inject={"user_role": "JUNIOR", "environment": "prod"},  # LLM can't override these
    tool_field="tool"  # Auto-inject tool name
)

agent = create_tool_calling_agent(llm, safe_tools, prompt)
executor = AgentExecutor(agent=agent, tools=safe_tools)

Every tool call is intercepted before execution. If the policy says no, the tool doesn't run. Period.
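The wrapping itself is a generic pattern: each tool's callable is replaced by one that consults the guard first. A rough, dependency-free sketch of that interception (hypothetical names, not the guard_tools internals; DenyDelete is a toy guard standing in for a compiled policy):

```python
from types import SimpleNamespace

def intercept(tool_name, func, guard, inject, tool_field="tool"):
    """Replace a tool's callable with one that consults the guard first."""
    def guarded(**kwargs):
        # Trusted context is merged last, so the model cannot override it.
        payload = {**kwargs, tool_field: tool_name, **inject}
        if not guard.verify(payload).allowed:
            return f"BLOCKED: policy denied '{tool_name}'"
        return func(**kwargs)
    return guarded

class DenyDelete:
    """Toy guard: allows every tool except 'delete'."""
    def verify(self, payload):
        return SimpleNamespace(allowed=payload.get("tool") != "delete")

guard = DenyDelete()
search = intercept("search", lambda **kw: "results", guard, inject={})
delete = intercept("delete", lambda **kw: "gone", guard, inject={})
print(search())  # results
print(delete())  # BLOCKED: policy denied 'delete'
```

The agent still sees a normal tool; the denial surfaces as the tool's return value, not as a prompt the model can argue with.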

Context Injection

Pass runtime context that the LLM cannot override — user roles, environment, rate limits:

safe_tools = guard_tools(
    tools=tools,
    guard=guard,
    inject={
        "user_role": current_user.role,         # From your auth system
        "environment": os.getenv("ENV"),        # prod/dev/staging
        "rate_limit_remaining": quota.remaining # Dynamic limits
    }
)
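The reason injected context wins is merge order: trusted values are applied after the model-supplied arguments, so a prompt-injected user_role can never survive. A hypothetical sketch of that merge (build_payload is illustrative, not the library's function):

```python
def build_payload(llm_args, inject, tool_name, tool_field="tool"):
    """Merge order is the mechanism: trusted values are applied last."""
    payload = dict(llm_args)         # untrusted, model-chosen arguments
    payload[tool_field] = tool_name  # auto-injected tool name
    payload.update(inject)           # trusted context always wins
    return payload

# Even if the model claims a senior role, the injected value prevails.
p = build_payload({"amount": 5000, "user_role": "SENIOR"},
                  inject={"user_role": "JUNIOR", "environment": "prod"},
                  tool_name="transfer")
assert p["user_role"] == "JUNIOR" and p["tool"] == "transfer"
```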


Tools (2)

verify: Compiles and performs Z3 formal verification on a CSL policy file.
simulate: Tests a specific input scenario against a CSL policy file.

Configuration

claude_desktop_config.json
{
  "mcpServers": {
    "csl-core": {
      "command": "cslcore",
      "args": ["repl"]
    }
  }
}

Try it

Verify the safety policy defined in my_policy.csl for logical consistency.
Simulate an action where a user with level 2 attempts to perform a DELETE operation using my_policy.csl.
Check if the current policy allows a READ action for a user with level 1.

Frequently Asked Questions

What are the key features of CSL-Core?

- Deterministic safety enforcement outside the LLM
- Z3 formal verification of safety policies
- Adversarial attack resistance with a 0% bypass rate
- Runtime context injection that LLMs cannot override
- Seamless integration with LangChain agents

What can I use CSL-Core for?

- Preventing unauthorized tool usage in AI agents
- Enforcing strict role-based access control for LLM actions
- Ensuring PII is never sent to external endpoints
- Implementing deterministic rate limits for AI-driven tasks

How do I install CSL-Core?

Install CSL-Core by running: pip install csl-core

What MCP clients work with CSL-Core?

CSL-Core works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, and other editors with MCP support.
