Claude Concilium
Multi-agent AI consultation framework for Claude Code via MCP.
Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.
Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ──► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate
The Problem
Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.
The Solution
Concilium runs parallel consultations with multiple LLMs over the standard MCP protocol. Each MCP server wraps a CLI tool, so no API keys are needed for the primary providers (they authenticate via OAuth).
Key features:
- Parallel consultation with 2+ AI agents
- Production-grade fallback chains with error detection
- Each MCP server works standalone or as part of Concilium
- Plug & play: clone, npm install, add to .mcp.json
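The parallel consultation step can be sketched with `Promise.allSettled`, which collects every agent's answer even when one provider fails. The `consultOpenAI`/`consultGemini` functions below are hypothetical stand-ins for the real MCP tool calls, not part of this repo:

```javascript
// Sketch of one consultation round. The two consult* functions are
// hypothetical placeholders for real MCP tool calls (openai_chat, gemini_chat).
async function consultOpenAI(prompt) {
  return { agent: "openai", opinion: `OpenAI opinion on: ${prompt}` };
}

async function consultGemini(prompt) {
  return { agent: "gemini", opinion: `Gemini opinion on: ${prompt}` };
}

async function concilium(prompt) {
  // Fire both consultations at once; allSettled keeps the partial results
  // even if one provider errors out or hits its quota.
  const results = await Promise.allSettled([
    consultOpenAI(prompt),
    consultGemini(prompt),
  ]);
  return results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
}

concilium("Review this code for race conditions").then((opinions) => {
  console.log(opinions.map((o) => o.agent).join(", ")); // prints "openai, gemini"
});
```

Because failed consultations are filtered out rather than thrown, a single dead provider degrades the round instead of aborting it.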
Architecture
┌───────────────────────────────────────────────────────┐
│                      Claude Code                      │
│                                                       │
│        "Review this code for race conditions"         │
│                                                       │
│   ┌──────────────┐       ┌──────────────┐            │
│   │ MCP Call #1  │       │ MCP Call #2  │ (parallel) │
│   └──────┬───────┘       └──────┬───────┘            │
│          │                      │                    │
└──────────┼──────────────────────┼────────────────────┘
           │                      │
           ▼                      ▼
    ┌──────────────┐       ┌──────────────┐
    │  mcp-openai  │       │  mcp-gemini  │   Primary agents
    │ (codex exec) │       │ (gemini -p)  │
    └──────┬───────┘       └──────┬───────┘
           │                      │
           ▼                      ▼
    ┌──────────────┐       ┌──────────────┐
    │    OpenAI    │       │    Google    │   LLM providers
    │   (OAuth)    │       │   (OAuth)    │
    └──────────────┘       └──────────────┘
Fallback chain (on quota/error):
OpenAI → Qwen → DeepSeek
Gemini → Qwen → DeepSeek
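The chains above can be sketched as a try-in-order loop: attempt each provider and return the first success. This is a simplified assumption about the servers' behavior; the actual error detection inspects CLI output rather than relying on thrown exceptions:

```javascript
// Sketch of a fallback chain: try each provider in order, return the first
// successful answer. Providers here are async functions that throw on
// quota/errors - a simplified stand-in for the real servers' error detection.
async function withFallback(providers, prompt) {
  const errors = [];
  for (const [name, consult] of providers) {
    try {
      return { provider: name, answer: await consult(prompt) };
    } catch (err) {
      errors.push(`${name}: ${err.message}`); // remember why it failed
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}

// Example: OpenAI hits its quota, so the chain falls through to Qwen.
const chain = [
  ["openai", async () => { throw new Error("quota exceeded"); }],
  ["qwen", async (p) => `Qwen answer to: ${p}`],
  ["deepseek", async (p) => `DeepSeek answer to: ${p}`],
];

withFallback(chain, "hello").then((r) => console.log(r.provider)); // prints "qwen"
```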
Quickstart
1. Clone and install
git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium
# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..
# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs
Expected output:
PASS mcp-openai (Tools: openai_chat, openai_review)
PASS mcp-gemini (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen (Tools: qwen_chat)
All tests passed.
2. Set up providers
Pick at least 2 providers:
| Provider | Auth | Free Tier | Setup |
|---|---|---|---|
| OpenAI | codex login (OAuth) | ChatGPT Plus weekly credits | Setup guide |
| Gemini | Google OAuth | 1000 req/day | Setup guide |
| Qwen | OAuth or API key | Varies | Setup guide |
| DeepSeek | API key | Pay-per-use (cheap) | Setup guide |
3. Add to Claude Code
Copy config/mcp.json.example and update paths:
cp config/mcp.json.example .mcp.json
# Replace "/path/to/claude-concilium" with your actual path
Or add servers individually to your existing .mcp.json:
{
"mcpServers": {
"mcp-openai": {
"type": "stdio",
"command": "node",
"args": ["/absolute/path/to/servers/mcp-openai/server.js"],
"env": {
"CODEX_HOME": "~/.codex-minimal"
}
},
"mcp-gemini": {
"type": "stdio",
"command": "node",
"args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
}
}
}
4. Install the skill (optional)
Copy the Concilium skill to your Claude Code commands:
cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md
Now use /ai-concilium in Claude Code to trigger a multi-agent consultation.
MCP Servers
Each server can be used independently or as part of the full Concilium setup.
Tools (5)

| Tool | Description |
|---|---|
| openai_chat | Chat with OpenAI model |
| openai_review | Review code using OpenAI |
| gemini_chat | Chat with Gemini model |
| gemini_analyze | Analyze code using Gemini |
| qwen_chat | Chat with Qwen model |

Environment Variables

| Variable | Description |
|---|---|
| CODEX_HOME | Path to codex configuration |
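Once a server is registered, Claude Code invokes its tools over JSON-RPC. A `tools/call` request for `openai_chat` looks roughly like this (the request envelope follows the MCP specification; the `prompt` argument name is an assumption, so check the server's tool schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "openai_chat",
    "arguments": { "prompt": "Review this function for race conditions" }
  }
}
```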