<picture>
  <source media="(prefers-color-scheme: dark)" srcset="./assets/banner-dark.png" />
  <source media="(prefers-color-scheme: light)" srcset="./assets/banner-light.png" />
  <img src="./assets/banner-light.png" alt="Bonnard" />
</picture>

Self-hosted semantic layer for AI agents.
Docs · CLI · Discord · Website
Bonnard is an agent-native semantic layer: define your metrics once, and every consumer (AI agents, apps, dashboards) gets the same governed answer. This repo is the self-hosted Docker deployment: run Bonnard on your own infrastructure with no cloud account needed.
Quick Start
```shell
# 1. Scaffold project
npx @bonnard/cli init --self-hosted

# 2. Configure your data source
#    Edit .env with your database credentials

# 3. Start the server
docker compose up -d

# 4. Define your semantic layer
#    Add cube/view YAML files to bonnard/cubes/ and bonnard/views/

# 5. Deploy models to the server
bon deploy

# 6. Verify your semantic layer
bon schema

# 7. Connect AI agents
bon mcp
```
Requires Node.js 20+ and Docker.
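Step 4 above asks for cube and view YAML files. As a sketch only, a minimal Cube-style model might look like this; the `orders` table, its columns, and the view name are placeholder assumptions, not part of this repo:

```yaml
# bonnard/cubes/orders.yml (hypothetical example)
cubes:
  - name: orders
    sql_table: public.orders
    measures:
      - name: count
        type: count
      - name: total_amount
        type: sum
        sql: amount
    dimensions:
      - name: status
        type: string
        sql: status

# bonnard/views/orders_overview.yml (hypothetical example)
views:
  - name: orders_overview
    cubes:
      - join_path: orders
        includes:
          - count
          - total_amount
          - status
```

After `bon deploy`, `bon schema` should list the cube and view with their measures and dimensions.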
What's Included
- MCP server — AI agents query your semantic layer over the Model Context Protocol
- Cube semantic layer — SQL-based metric definitions with caching, access control, and multi-database support
- Cube Store — pre-aggregation cache for fast analytical queries
- Admin UI — browse deployed models, views, and measures at `http://localhost:3000`
- Deploy API — push model updates via `bon deploy` without restarting containers
- Health endpoint — `GET /health` for uptime monitoring
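For example, Cube Store is populated by pre-aggregations declared on a cube. A hypothetical daily rollup, where the measure, dimension, and column names are placeholders:

```yaml
# Hypothetical fragment added under a cube definition in bonnard/cubes/
pre_aggregations:
  # Daily order counts by status, cached in Cube Store
  - name: orders_by_status
    measures:
      - CUBE.count
    dimensions:
      - CUBE.status
    time_dimension: CUBE.created_at
    granularity: day
```

Queries that match the rollup's measures, dimensions, and granularity are then served from the cache instead of hitting the source database.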
Connecting AI Agents
Run `bon mcp` to see the connection config for your setup. Examples below.
Claude Desktop / Cursor
```json
{
  "mcpServers": {
    "bonnard": {
      "url": "https://bonnard.example.com/mcp",
      "headers": {
        "Authorization": "Bearer your-secret-token-here"
      }
    }
  }
}
```
Claude Code
```json
{
  "mcpServers": {
    "bonnard": {
      "type": "url",
      "url": "https://bonnard.example.com/mcp",
      "headers": {
        "Authorization": "Bearer your-secret-token-here"
      }
    }
  }
}
```
CrewAI (Python)
```python
# MCPServerAdapter ships in the crewai-tools package, not crewai itself
from crewai_tools import MCPServerAdapter

server_params = {
    "url": "https://bonnard.example.com/mcp",
    "transport": "streamable-http",
    "headers": {"Authorization": "Bearer your-secret-token-here"},
}

with MCPServerAdapter(server_params) as tools:
    ...  # pass `tools` to your agents
```
Production Deployment
Authentication
Protect your endpoints by setting `ADMIN_TOKEN` in `.env`:

```shell
ADMIN_TOKEN=your-secret-token-here
```

All API and MCP endpoints will require `Authorization: Bearer <token>`. The `/health` endpoint remains open for monitoring.

Restart after changing `.env`:

```shell
docker compose up -d
```
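The bearer scheme is plain HTTP auth, so any client that can set a header can authenticate. A minimal stdlib sketch, where the URL and token are placeholders:

```python
import urllib.request

def auth_headers(token: str) -> dict:
    """Build the Authorization header Bonnard's API and MCP endpoints expect."""
    return {"Authorization": f"Bearer {token}"}

headers = auth_headers("your-secret-token-here")
print(headers["Authorization"])

# Against a live deployment (placeholder URL), /health needs no token:
# with urllib.request.urlopen("https://bonnard.example.com/health") as resp:
#     print(resp.status)
# Authenticated endpoints take the header:
# req = urllib.request.Request("https://bonnard.example.com/mcp", headers=headers)
```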
TLS with Caddy
Caddy provides automatic HTTPS via Let's Encrypt.
Create a Caddyfile next to your docker-compose.yml:
```
bonnard.example.com {
    reverse_proxy localhost:3000
}
```

Note that if Caddy itself runs as a compose service (as below), `localhost` inside the Caddy container will not reach Bonnard; point `reverse_proxy` at the Bonnard service name from your docker-compose.yml instead (e.g. `reverse_proxy bonnard:3000` if the service is named `bonnard`).
Add Caddy to your docker-compose.yml:
```yaml
caddy:
  image: caddy:2
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile:ro
    - caddy_data:/data
  restart: unless-stopped
```
Add the volume at the top level:
```yaml
volumes:
  models: {}
  caddy_data: {}
```
Then remove the Bonnard port mapping (`- "3000:3000"` under `ports:`), since Caddy now handles external traffic.
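Putting the pieces together, the compose file ends up shaped roughly like this; the `bonnard` service name and its contents are placeholders for whatever `init` generated:

```yaml
services:
  bonnard:
    # ...the generated Bonnard service, unchanged except that the
    # "3000:3000" port mapping is removed
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
    restart: unless-stopped

volumes:
  models: {}
  caddy_data: {}
```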
Deploy to a VM
```shell
# Copy project files to your server
scp -r . user@your-server:~/bonnard/

# SSH in and start
ssh user@your-server
cd ~/bonnard
docker compose up -d
```
Configuration
| Variable | Description | Default |
|---|---|---|
| `ADMIN_TOKEN` | Authentication token for API and MCP endpoints | — |
| `CUBEJS_DB_TYPE` | Database driver (`postgres`, `duckdb`, `snowflake`, `bigquery`, `databricks`, `redshift`, `clickhouse`) | `duckdb` |
| `CUBEJS_DB_*` | Database connection settings (host, port, name, user, pass) | — |
| `CUBEJS_DATASOURCES` | Comma-separated list of data source names (for multi-database setups) | `default` |